What are the best practices? (Has Best Practice)

From The Embassy of Good Science
Available and relevant practice examples (max. 400 words)


Showing 66 pages using this property.
Open data practices can help increase transparency, allowing other researchers and interested parties to undertake their own analyses. A technique to identify and classify spin in RCT reports has been developed by Boutron et al., focusing on RCTs reporting statistically nonsignificant primary outcomes, because the interpretation of these results is more likely to be subject to prior beliefs of effectiveness, leading to potential bias in reporting. Similar approaches are available to systematically assess the explicit presentation of nonsignificant results in trial reports in various subspecialties, as described by Lockyer et al. and Turrentine.
Institutions and journals need to have clear guidelines on publication and authorship in place. Guidelines should include a section about gaining consent from all authors before submitting a manuscript or grant proposal. The COPE Forum suggests that journals should send acknowledgements to all listed authors, not just the corresponding author, upon receiving a manuscript.
Concern for research collaborators and those involved in research forms an important tenet of the ECoC.<sup>4</sup> In the spirit of respect and collegiality, it is essential that decisions regarding benefits and burdens be made after sufficient deliberation with the different teams. According to the ECoC, all involved partners should agree in advance on important aspects of the research, such as the goals and outcomes.<sup>4</sup> The attribution of credits (such as authorship) also forms an important benefit, and should be decided in consultation with all collaborators. The Montreal Statement on Research Integrity in Cross-Boundary Research Collaborations<sup>5</sup> states that all involved partners should reach an agreement at the outset, and later as needed, on how the outcomes of the research, research data, and authorship and publication responsibilities will be handled. The Committee on Publication Ethics (COPE) also offers best practice guidelines on how to handle authorship disputes, should they arise.<sup>6</sup>
Considering fake reviews, there are several strategies journals can implement to overcome the challenges. A first strategy is not accepting authors' suggestions for peer reviewers: the reviewers are then chosen by the journal editors, which ensures there are no ‘fake reviewers’. However, many journals cannot find (enough) peer reviewers, and granting the request can save journals time. At times, journals need to rely on the suggestions of authors to find peer reviewers at all. A second strategy is implementing an easy system that verifies reviewers. One online platform created to facilitate verification is Publons. Here, journal editors can do background checks on reviewers, and easily check their contributions in the field. In addition, reviewers get recognition for their reviews, even if these are anonymous.
A working paper by [https://www.leru.org/files/LERU-PPT_Bias-paper_Jadranka_Gvozdanovic_January_19_18.pdf LERU] sets out the following recommendations:
#"Universities and other research institutions need to have regular '''monitoring''' in place to examine whether their organisational structures and processes are susceptible to a potentially biased access to resources that cannot be justified by the meritocratic principle. If so, they should develop and implement a plan to mitigate any identified bias. It is crucial that the university’s leadership commits to this plan and sees it through with appropriate encouragement, support and initiatives throughout the organisation. Clear '''accountability''' should be assigned, with final responsibility for action resting with the President/Rector and the governing body.
#Universities and other research institutions should examine crucial areas of potential bias and define '''measures''' for countering bias. Progress needs to be monitored and, if necessary, measures re-examined and adjusted.
#Universities and other research institutions should gather expertise and organise '''gender bias training''' in various formats, including the possibility of anonymous training. There is no shortage of national and international resources which organisations can use.
#'''Recruitment''' and/or '''funding processes''' should be as open and transparent as possible and be genuinely merit-based. This includes measures such as briefing selection committees about bias pitfalls, deciding on clear selection criteria at the outset, letting '''external observers''' monitor the selection process and involving external evaluators.
#There should be close monitoring of potential '''bias in language''' used in recruitment processes.
#Universities should undertake action towards eliminating the '''pay gap''' and monitor progress, examining bias as a contributing factor to the pay gap.
#Employees should be compensated for '''parental leave''', making sure the process is bias-free, for example by extending fixed-term positions or calculating the leave administratively as active service, yet exempt from publication expectations.
#Universities and other research institutions should monitor '''precarious contracts''' and '''part-time positions''' for any gender-based differences and correct any inequalities. Universities should examine conditions for part-time positions for professors and their gendered division.
#Universities and other research institutions should undertake '''positive action''' towards a proper representation of women in all leading positions, making sure that leadership and processes around leadership are free from bias."
The International Committee of Medical Journal Editors (ICMJE) provides recommendations for defining the roles of authors and contributors. The ICMJE recommends four main criteria that should be taken into account for authorship: a) substantial contribution to the study design, data collection, data analysis, and data interpretation; b) drafting or critically revising the work; c) approval of the final version for publication; and d) accountability for all aspects of the work, including its integrity. The ICMJE emphasizes that those who meet all four criteria should be listed as authors, and provides guidance for acknowledging those who do not meet all of the above-mentioned criteria but still contributed to the study and whose contribution should be acknowledged. The Contributor Roles Taxonomy (CRediT) is another example of guidance for avoiding authorship malpractices and disputes. A CRediT statement contains 14 items related to the authors’ contributions; for example, some of the items are the authors’ contributions in conceptualization, methodology, analysis, writing and editing the manuscript, visualization, and supervision. Many publishers have already adopted the CRediT taxonomy and encourage authors to use it when providing author contributions during the manuscript submission process.
It is difficult to cope with negative criticism, especially when it is hostile in nature. Always keep in mind that any reviewer is a person, just like you. Maybe they were burdened with work, maybe they had a bad day at the office. It is nothing personal, and can happen to anybody. Think of anything useful that you can take from such a review: maybe there is advice hidden under the unnecessary criticism. Speak with your superior, talk to your mentor. If you both consider the review insulting, consider raising the topic with the editor.
A lot has been said about authorship. One of the milestones in tackling authorship is the famous set of four criteria of the International Committee of Medical Journal Editors (ICMJE). Those who fulfil the ICMJE criteria should be listed as authors (to avoid withholding credit where credit is due and to avoid ghost-writers), and authors should fulfil all of those criteria (to avoid guest and honorary authorship). Researchers who fulfil some, but not all four, criteria should be acknowledged in the manuscript. When submitting a research manuscript, journals will often ask for a statement of authorship, signed by the authors. That way, journal editors want to make sure all authors have been informed, and that they can be held accountable if any problem arises.
A variety of journals, such as [https://journals.plos.org/plosone/s/submission-guidelines PLOS ONE], [https://thelancet.com/pb/assets/raw/Lancet/authors/tlrm-info-for-authors.pdf The Lancet] or [https://www.nature.com/nature/for-authors/supp-info Nature], request complete disclosure and transparency from authors, so by not acknowledging your contributors you are disregarding the principle of transparency. This also means that you are not being completely honest, because you do not acknowledge that someone has done a certain amount of work for you. Some authors even use the help of professional writers who, for example, may substantially contribute to drafting or write a full first draft of the manuscript. In such a case, authors should acknowledge the contribution and obtain written permission from those named in the acknowledgments.
For successful collaboration it is necessary to:
*''Address mutual expectations.'' Each team member may have different expectations about their contribution and the recognition they will receive. If you discuss these expectations openly, it will be easier for each team member to contribute effectively to the project.
*''Clearly divide and define who is responsible for what task.'' Similar to expectations, a clear division of labor makes each team member's role in the project clear. This facilitates conversations about authorship.
*''Determine authorship.'' In a collaborative effort, it may appear that each person has a clear role. However, this assumption can lead to confusion and disagreement about authorship. Agree on authorship at the beginning of the project.
*''Communicate frequently.'' Ensure open communication with the team. If you do not have a clear timeline or research goals, it can be easy to lose sight of each other.
*''Agree on access to data.'' Not all parties may have access to all data. A clear conversation at the beginning of the project is necessary to determine who will have access to what information.
*Remember that collaboration in research also means ''a shared responsibility for the integrity of the research.''
Different fields take different stances with regard to self-plagiarism. For example, legal research has a lot more tolerance for reuse of one's own work than biomedical science. In 1969, the New England Journal of Medicine announced it would no longer publish already published work. This became known as the Ingelfinger rule, and it became a norm for high-quality scientific journals. Because of the rise of preprint servers (such as arXiv), journals now tend to loosen that policy. Secondary publications are a different issue, as they clearly state that the work has been previously published. They are produced with the goal of reaching a bigger (and sometimes different) audience, often through translation into different languages. Keep in mind that a lot of scientific journals use computer software to check whether your text is similar to anything already published. Most of this software works by screening available online databases for similarities.
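The core idea behind such similarity screening can be illustrated with a minimal sketch. Real services compare submissions against large indexed databases with far more sophisticated matching; the function names below are illustrative, not any vendor's API. Overlap between two texts is measured here as the Jaccard similarity of their word trigram sets:

```python
def ngrams(text, n=3):
    """Split text into its set of overlapping word n-grams."""
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def similarity(a, b, n=3):
    """Jaccard similarity between the n-gram sets of two texts (0.0-1.0)."""
    ga, gb = ngrams(a, n), ngrams(b, n)
    if not ga and not gb:
        return 0.0
    return len(ga & gb) / len(ga | gb)

original = "the quick brown fox jumps over the lazy dog"
reused = "the quick brown fox jumps over a sleeping cat"
print(round(similarity(original, reused), 2))  # → 0.4
```

A score near 1.0 flags verbatim reuse; in practice, screening tools report the matching passages so an editor can judge whether the overlap is legitimate (e.g. a properly labelled secondary publication) or not.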
The ECoC states that all partners involved in research take full responsibility for the overall integrity of the project. All partners are also expected to have agreed at the outset on the standards of research integrity that will be maintained.<sup>1</sup> This can include all aspects of the research, from conception to publication, in order to prevent ambiguity at a later stage. The Montreal Statement on Research Integrity in Cross-Boundary Research Collaborations<sup>4</sup> states that all involved partners should openly discuss their customary practices and expectations, including those of research integrity. While every individual is fully responsible for their own contribution, there should also be a collective responsibility for the integrity of the project.<sup>4</sup>
*A zero-tolerance culture towards putative breaches of research integrity. When institutions or individuals turn a blind eye to misbehaviour, they fail to foster a culture of research integrity. A zero-tolerance culture encourages people to report suspicions of malpractice.
*A clear reporting system, including clear procedures and access to guidance and help (e.g. ombudspersons). A scheme has to be in place to handle expressions of concern and actual allegations of potential errors.
*Protection of whistleblowers, with clarity about the rights of both whistleblowers and persons who are accused.
Being a reviewer comes with the responsibility of reviewing others fairly. One way to promote fair processes is ''transparent'' peer review. For example, Nature, BMC and EMBO now publish peer review and editorial comments after a manuscript has been accepted for publication, when both reviewers and authors agree to this. In the words of Nature: “in adopting transparent peer review, we are taking a step towards supporting increased openness, accountability and trust in the publishing process.” Transparent peer review is an example of an initiative to encourage fair reviewing and to appreciate the contribution of reviewers. Moreover, having a bullying and harassment policy in place sends a signal that bullying, including unfair reviewing, is inappropriate, thereby promoting good behaviour among scientists. Lastly, conflicts of interest should always be disclosed when professional or personal interests collide with the review process.
'''Data organization'''
Data should be organized in a logical and structured way. Within research groups, consensus on naming and organizing data and files can help in structuring data. The University of Cambridge has provided [https://www.data.cam.ac.uk/data-management-guide/organising-your-data this resource], which gives a good overview of what you should keep in mind for naming files, organizing folders and more. In addition, they have collected various resources [https://www.data.cam.ac.uk/support/external that can support data management].

'''Pseudonymization'''
When performing research involving human subjects, participants should be pseudonymized or anonymized. Pseudonymization replaces the information that would allow for identification of individuals in a dataset with artificial identifiers; the main researchers retain access to an encryption key linking the identifiers back to the individuals. When John Smith, aged 31, and Jane Doe, aged 25, are in your data set, you should not pseudonymize them as ‘JS31’ and ‘JD25’, since those codes still encode their initials and ages. Correct pseudonymization is naming them, for example, participants 001 and 002. According to the GDPR, “‘personal data’ means any information relating to an identified or identifiable natural person (‘data subject’); an identifiable natural person is one who can be identified, directly or indirectly, in particular by reference to an identifier such as a name, an identification number, location data, an online identifier or to one or more factors specific to the physical, physiological, genetic, mental, economic, cultural or social identity of that natural person”.
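A minimal sketch of correct pseudonymization, assuming a simple list-of-dicts dataset (the function and field names are illustrative, not part of any standard): the direct identifier is replaced by a neutral sequential code, and the mapping back to the person, the 'key', is returned separately so it can be stored under restricted access.

```python
def pseudonymize(records, id_field="name"):
    """Replace the direct identifier in each record with a neutral code.

    Returns the pseudonymized records and the mapping ('key'), which must
    be stored separately and be accessible only to the main researchers.
    """
    key = {}
    cleaned = []
    for i, record in enumerate(records, start=1):
        code = f"participant-{i:03d}"
        key[code] = record[id_field]
        # keep the research variables, drop the direct identifier
        entry = {k: v for k, v in record.items() if k != id_field}
        entry["id"] = code
        cleaned.append(entry)
    return cleaned, key

participants = [{"name": "John Smith", "age": 31},
                {"name": "Jane Doe", "age": 25}]
data, key = pseudonymize(participants)
# data now holds records like {'age': 31, 'id': 'participant-001'}
```

Unlike ‘JS31’, the sequential codes carry no information about the person; re-identification is only possible via the separately stored key.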
'''Laboratory notes'''
When performing experiments, researchers should carefully and comprehensively report their experimental design, the materials and techniques used, and all of their outcomes. Here, reproducibility and transparency are paramount. This is important within the biomedical sciences, physics, chemistry, computer science, etc. Laboratory notebooks are important to use when performing experiments of any kind. The ten rules, as adapted below, can help in organizing and providing complete information in your notebook.
#'''Learn your institution's or laboratory's notebook policy''' or create one in your research team.
#'''Select the right medium for your lab notebook''', either electronic or hardcopy.
#'''Make a habit of keeping the lab notebook on your desk''' so your notebook is at hand while working.
#'''Record all scientific activities in your lab notebook''', including thoughts during meetings, theorizing about problems, etc.
#'''Record every entry with a date, subject and protocol''' to organize your lab journal or notebook.
#'''Keep a record of how every result was produced''' to ensure reproducibility of your experiments.
#'''Keep an overview of the different study protocols you use''', including adaptations from standard (laboratory) protocols.
#'''Keep a lab notebook that can serve as a legal record of your work''' to ensure you can take ownership of ideas, show you deserve authorship or protect intellectual property rights.
#'''Create a table of contents in your lab notebook''' to ensure your notebook is organized and easily searchable.
#'''Protect your lab notebook.''' Your lab notebook belongs to your institution, since you are funded through your institution. "Your lab notebook is part of the scientific legacy of your laboratory. Therefore, you need to protect your lab notebook."

'''Fieldnotes'''
For qualitative researchers it is important to record: 1) descriptive information related to the data generation; and 2) reflections on the process of data generation and interpretation. Both types of notes help in the interpretation and contextualization of findings. Descriptive information should focus on observations related to the research problem. For the reflective content, the importance of note taking is to place observations in the perspective of the researcher's "personal, cultural and situational experiences". Here, a critical attitude is important, with notes focusing on initial impressions, assumptions, concerns, and surprises.
The VIGOR (Vioxx Gastrointestinal Outcomes Research) study is one of the best-known cases in which researchers ignored safety risks. The study aimed to examine whether a new drug, Vioxx, produced by drug maker Merck, would cause fewer gastrointestinal side effects in the treatment of rheumatoid arthritis than naproxen. Over 8,000 patients participated in the study, half of them taking Vioxx and the other half naproxen. As it turned out, the risk of serious heart problems and death was twice as high for patients using Vioxx as for those using naproxen; however, the researchers decided to ignore the risks and obscure the results. Five years after Vioxx's launch, Merck withdrew the drug from the market, but by that time it had already sold billions of dollars' worth of the drug. Another study, published in the medical journal The Lancet, estimates that 88,000 Americans had heart attacks from taking Vioxx, and that 38,000 of them died.
Observational studies, such as cohort or case–control studies, are sometimes overinterpreted in terms of cause-effect relationships. Correlation between a factor and an outcome does not necessarily mean causation. When it comes to experimental studies, sometimes randomization is not possible for ethical reasons, which should be taken into account when interpreting the results of such studies. Sometimes outcome measures do not correspond completely to the questions asked in the study, i.e. they are only indirectly connected. All of this is usually addressed in research methodology classes. When planning, doing and reporting research, you can always rely on the appropriate EQUATOR reporting guidelines to make sure you have everything accounted for.
There are several ways to deal with this questionable research practice. The first is disclosing the changes made to the research design. The second is preregistration of studies.

'''Disclosing changes'''
Deviating from the research design is allowed in certain instances. For instance, new sub-questions can surface while progressing with the project. These can only be answered by performing extra analyses or different tests. In those cases, disclosing changes to the research design is considered good practice. These analyses or changes should be presented as explorative, rather than final.

'''Preregistration'''
Preregistration is the process of submitting the research design before performing the study. Preregistration can be seen as an effective way to keep researchers from being ‘lured’ into changing methods to present results more spectacularly. Some journals also publish protocols and/or accept studies based on their design, proposed methods and relevance, and make a commitment to publish the results. The outcome of the study is made of lesser importance, and the relevance of the study and the rigor of the study design of greater importance.
In a series in The Lancet on research waste, the following steps were suggested for setting research priorities and diminishing research waste (as cited from p. 158):
#"Include objectives in research groups' strategic plans and define the stakeholders whose opinions and priorities will be considered
#Draw on an existing summary of previous priority-setting exercises in the specialty before undertaking your own exercise
#Use available methodological reviews of research priority setting as guidance about how to meet priority-setting objectives
#Ensure that the priority-setting team has the necessary data, information about context, and skill set for their exercise
#Pilot, assess, revise, and update the priority-setting exercise at intervals
#Participate in discussions within the community of interest to share findings and experiences"
CRISPR technology is supposed to be used to help individuals with major life-threatening diseases. Recently, a new device was developed and introduced in a Phase I study in patients with Type 1 diabetes mellitus, which contains a medium of beta cells developed from pluripotent stem cells. Other potential areas of use of this technology are gene therapy in cancer treatment and personalized genetic medicine.
How to reform the incentive structure of science is a subject of ongoing research and debate. See, e.g.:
*Bornmann, L., & Williams, R. (2017). Can the journal impact factor be used as a criterion for the selection of junior researchers? A large-scale empirical study based on ResearcherID data. Journal of Informetrics, 11(3), 788–799. doi:[https://doi.org/10.1016/j.joi.2017.06.001 10.1016/j.joi.2017.06.001]
*Krimsky, S. (2004). Science in the Private Interest: Has the Lure of Profits Corrupted Biomedical Research? Rowman & Littlefield.
*Sandström, U., & Van den Besselaar, P. (2018). Funding, evaluation, and the performance of national research systems. Journal of Informetrics, 12(1), 365–384. doi:[https://doi.org/10.1016/j.joi.2018.01.007 10.1016/j.joi.2018.01.007]
Holm and Ploug suggest that researchers should address the following two questions:
#If the results of your current (well planned and well conducted) project point in the opposite direction of the results of your previous research on this topic, would your first reaction be to reanalyse the data and reconsider your methods, or to reconsider your previous conclusions?
#If your findings were exactly the same as those of the opposing researchers in this field of research, would your policy recommendations be any different from the recommendations of the opposing group?
Four questions about polarized research:
*'''Why does polarized research exist?''' Because researchers have different perspectives and interests.
*'''Is polarized research fraud?''' No, because it is based on valid scientific methods.
*'''How does polarized research occur?''' Researchers may use different definitions, indexes, end-points, models, statistical methods, interpretations, etc., making their results come out very differently.
*'''How can we avoid polarized research?''' One suggestion is to require authors to declare a "polarized conflict of interest" when submitting papers. Another is to make editors and publishers check for polarized conflicts of interest.
Lists of predatory publishers (blacklists) as well as lists of high-quality open access publishers (whitelists) are of great value to researchers and decision makers.

===Blacklists===
The University of Colorado librarian Jeffrey Beall developed a list of potential predatory journals in 2008, which has since been taken offline because of certain flaws in the methodology.
*[https://beallslist.weebly.com/ Beall's list]
*[https://www2.cabells.com/blacklist Cabells' lists]
*[https://predatoryjournals.com/ Stop Predatory Journals]

===Whitelists===
*[https://doaj.org/ Directory of Open Access Journals (DOAJ)]

===Choosing a journal===
Stefan Eriksson and Gert Helgesson have identified 25 signs of predatory publishing, and argue that the more points on the list that apply to the journal at hand, the more skeptical you should be:
#The publisher is not a member of any recognized professional organization committed to best publishing practices (like COPE or EASE)
#The journal is not indexed in well-established electronic databases (like MEDLINE or Web of Science)
#The publisher claims to be a "leading publisher" even though it just got started
#The journal and the publisher are unfamiliar to you and all your colleagues
#The papers of the journal are of poor research quality, and may not be academic at all (for instance allowing for obvious pseudo-science)
#There are fundamental errors in the titles and abstracts, or frequent and repeated typographical or factual errors throughout the published papers
#The journal website is not professional
#The journal website does not present an editorial board or gives insufficient detail on names and affiliations
#The journal website does not reveal the journal's editorial office location or uses an incorrect address
#The publishing schedule is not clearly stated
#The journal title claims a national affiliation that does not match its location (such as "American Journal of ..." while being located on another continent) or includes "International" in its title while having a single-country editorial board
#The journal mimics another journal title or the website of said journal
#The journal provides an impact factor in spite of the fact that the journal is new (which means that the impact cannot yet be calculated)
#The journal claims an unrealistically high impact based on spurious alternative impact factors (such as 7 for a bioethics journal, which is far beyond the top notation)
#The journal website posts non-related or non-academic advertisements
#The publisher of the journal has released an overwhelmingly large suite of new journals at one occasion or during a very short period of time
#The editor in chief of the journal is editor in chief also for other journals with widely different focus
#The journal includes articles (very far) outside its stated scope
#The journal sends you an unsolicited invitation to submit an article for publication, while making it blatantly clear that the editor has absolutely no idea about your field of expertise
#Emails from the journal editor are written in poor language, include exaggerated flattering (everyone is a leading profile in the field), and make contradictory claims (such as "You have to respond within 48 h" while later on saying "You may submit your manuscript whenever you find convenient")
#The journal charges a submission or handling fee, instead of a publication fee (which means that you have to pay even if the paper is not accepted for publication)
#The types of submission/publication fees and what they amount to are not clearly stated on the journal's website
#The journal gives unrealistic promises regarding the speed of the peer review process (hinting that the journal's peer review process is minimal or non-existent), or boasts an equally unrealistic track record
#The journal does not describe copyright agreements clearly or demands the copyright of the paper while claiming to be an open access journal
#The journal displays no strategies for handling misconduct or conflicts of interest, or for securing the archiving of articles when the journal is no longer in operation

A number of other initiatives have also put together criteria for journal selection:
*[https://thinkchecksubmit.org/ Guideline to choose the right journal for research]
*[https://guides.mclibrary.duke.edu/beinformed Be iNFORMEd: Checklist] - A checklist to assess the quality of a journal or publisher

==Other information==
[http://www.wame.org/identifying-predatory-or-pseudo-journals The World Association of Medical Editors (WAME) statement on predatory publishing]

[https://www.tandfonline.com/doi/full/10.1080/03007995.2019.1646535 The American Medical Writers Association (AMWA), European Medical Writers Association (EMWA), and International Society for Medical Publication Professionals (ISMPP) Joint Position Statement on Predatory Publishing]

[http://www.icmje.org/news-and-editorials/fake_predatory_pseudo_journals_dec17.html ICMJE document on predatory publishing]
==QRPs==
According to research integrity experts who participated in a survey, there are a number of QRPs that occur frequently and have a high impact on science. In relation to study design, for instance, QRPs include presenting misleading information in a grant application or ignoring risks of unexpected findings or safety risks to study participants, workers or the environment. Under data collection falls behaviour such as collecting more data when noticing that statistical significance is almost reached, or keeping inadequate notes of the research process. In relation to reporting, examples of QRPs are hypothesizing after the results are known (HARKing), concealing results that contradict earlier findings, or not publishing a study with negative results. Moreover, selective citing to enhance your own findings or to please editors and colleagues is reported to occur often. QRPs that fall under collaboration are demanding or accepting authorship for which you do not qualify, and reviewing your own papers. In addition, the misbehaviour that is estimated to occur the most and have a high impact on truth is insufficiently supervising junior coworkers. The misbehaviour that occurs the most and has the highest impact on trust is using published ideas of others without referencing.

==Prevention==
A way to counter QRPs could be to create awareness about research integrity issues and to alter the current reward system. Instead of rewarding the number of publications, alternative aspects that could be rewarded include a researcher's commitment to pre-registration, data sharing and open science.
Since 2008, the American Food and Drug Administration (FDA) has required that the results of all trials be posted within one year of their completion. This legislation, like others, does not apply retroactively, which means that treatments tested before 2008 need not have published results. Moreover, since the legislation came into force, no studies have been fined for noncompliance, and research has shown that 80% of clinical trials do not comply. The major clinical trial registries (ClinicalTrials.gov, EudraCT) have independent trial trackers, led by the DataLab at Oxford University, which collect lists of trials that have ended and record whether or not they have published their results. The DataLab also collaborated with Goldacre on OpenTrials, which aims to collect everything related to clinical trials in one place, including their registration, data, reports, publications and researchers.
One technique for detecting the fabrication of numbers is to check the “rightmost digits” of the collected data. The “rightmost digit” is the digit a number ends in. It is considered “the most random digit of a number,” meaning that, across a data set, the rightmost digits should be uniformly distributed, as in a lottery. Since the rightmost digits in each study should be unpredictable, the appearance of any pattern is a reason to suspect data fabrication. Research conducted by Mosimann et al. in 1995 showed that most people cannot generate random numbers when fabricating data, which makes it possible to detect potentially fabricated data. They also developed a program that performs a chi-square test for uniformity of the digit distributions, checking whether the reported digits are uniformly distributed. If the distribution of rightmost digits deviates markedly from uniformity, the numbers may have been fabricated. There are other methods that can be used to detect the fabrication of rightmost digits. For example, some journals have adopted a policy of statistical review for all papers containing numerical data. In addition, published graph data can be compared with “raw” notebook or computer data to determine whether the numbers have been reported correctly. Authors should present the raw data that support their findings, and journals, universities and granting agencies should promote this practice. Some argue that the use of statistical methods will significantly reduce the fabrication of numerical data.
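To make the rightmost-digit check concrete, here is a minimal sketch in Python of a chi-square statistic for the uniformity of last digits. The function name and the plain-Python implementation are ours for illustration, not Mosimann et al.'s program; a large statistic relative to the chi-square distribution with 9 degrees of freedom is a reason for closer scrutiny of the raw data, not proof of fabrication.

```python
from collections import Counter

def last_digit_chi_square(values):
    """Chi-square statistic for uniformity of rightmost (last) digits.

    Genuine measurement data are expected to have roughly uniform
    last digits, so a statistic that is large relative to the
    chi-square distribution with 9 degrees of freedom is a red flag
    warranting closer inspection, not proof of misconduct.
    """
    digits = [abs(int(v)) % 10 for v in values]  # rightmost digit of each value
    n = len(digits)
    counts = Counter(digits)
    expected = n / 10  # uniform expectation over the digits 0-9
    return sum((counts.get(d, 0) - expected) ** 2 / expected
               for d in range(10))
```

For perfectly uniform last digits the statistic is 0.0, while a data set whose values all end in the same digit scores far higher (ten values all ending in 5 give 90.0), mirroring the kind of pattern these tests are designed to flag.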
The best practice is to preregister study protocols online in a registry. When describing their study designs, researchers should be as transparent and complete as possible. To date, only two reliable animal study registries are available: [https://preclinicaltrials.eu/ Preclinicaltrials.eu] and the [https://www.animalstudyregistry.org/asr_web/index.action Animal Study Registry]. It is also possible to use general registries, e.g. the Open Science Framework; however, the registration forms will not be tailored specifically to animal studies. If a study could not be preregistered, it is still worthwhile to register its protocol at a later stage, especially if the study could not be published, although prospective registration (i.e., registration before the experiments begin) should be encouraged.
In 2004, the International Committee of Medical Journal Editors (ICMJE) announced that clinical trials beginning after July 1, 2005 would fall under a new trial registration policy. To be published in member journals, trials would have to be registered, prior to the enrollment of the first participant, in an approved trial registry that meets the quality criteria developed by the WHO. Since 2005, the ICMJE has reiterated that registering a prospective study should be a condition of publication, and after the announcement several journals endorsed this policy. However, adherence to this requirement by both researchers and journal editors remains low and, unfortunately, not all clinical trials are registered before they start. Recent findings suggest that the reasons for researchers' low adherence include a lack of awareness of the criteria, misunderstandings regarding the ICMJE's definition of a clinical trial, and practical difficulties with registration. On the part of journal editors, the main reason is that not all journals are equally committed to enforcing the registration requirement. The Committee on Publication Ethics has suggested that "it is probably best to judge each paper on a case by case basis."
The most relevant examples are the studies needed for drug approval. The approval procedure usually requires a series of clinical trials divided into three phases. According to the European Medicines Agency (EMA), phase I and II trials can involve model building, whereas phase III trials are always designed as confirmatory trials. Both the EMA and the Food and Drug Administration (FDA) require statistical pre-registration before the beginning of a trial. Exploratory trials aim to produce evidence of the effectiveness of new drugs; these results then lead to confirmatory trials. The need for pre-registration of trials is demonstrated by the following case. In 2004, the New York attorney general’s office filed a lawsuit against the pharmaceutical company GlaxoSmithKline after four unpublished clinical trials showed evidence that use of the antidepressant Paxil increases the risk of suicidal tendencies among young people. This lawsuit helped raise awareness that studies need to be pre-registered, and that year the International Committee of Medical Journal Editors began requiring pre-registration of clinical trials. US law also requires clinical trials to be pre-registered. Journals and research funders support pre-registration, and some organizations promote it as an important step towards openness and transparency in research. There have also been other interesting efforts to promote pre-registration, such as the Preregistration Challenge, sponsored by the Center for Open Science, a campaign that offered $1,000 awards to researchers who pre-registered their studies and published their results within a deadline.

These initiatives have led to a certain “cultural shift”: there are now more than 8,000 pre-registrations on the Open Science Framework, covering research in different disciplines. APS journals also began to encourage pre-registration in 2014. From 2014 to 2019, 43 of 154 articles published in Psychological Science earned a “Preregistered” badge for pre-registering the design and analysis plan of their studies. Although pre-registration has demonstrated benefits for the trustworthiness of research, the practice has yet to be widely adopted across the scientific community.
In 2012, ORCID launched its Registry, as a result of which researchers could be assigned unique identifiers, a 16-character code composed of the digits 0-9 (the final character, a checksum, may also be "X"), and thus distinguish themselves from other researchers. As of 2019, there were more than 7 million ORCID accounts registered to individual researchers, universities, scientific publishers and commercial companies. Increasingly, funding organisations require that their applicants provide an ORCID identifier.
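The final character of an ORCID iD is a check character computed with the ISO/IEC 7064 MOD 11-2 algorithm over the first 15 digits, which is why some identifiers end in "X". A minimal sketch in Python (the function name is ours for illustration):

```python
def orcid_check_character(base_digits: str) -> str:
    """ISO 7064 MOD 11-2 check character for the first 15 ORCID digits."""
    total = 0
    for ch in base_digits:
        total = (total + int(ch)) * 2  # running MOD 11-2 accumulation
    remainder = total % 11
    result = (12 - remainder) % 11
    return "X" if result == 10 else str(result)

# The well-known sample iD 0000-0002-1825-0097 ends in check digit 7:
assert orcid_check_character("000000021825009") == "7"
```

A result of 10 maps to the letter "X", so an iD such as one whose base digits yield remainder 2 will end in "X" rather than a digit.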
One of the best examples of the application of digital tools within the humanities is the collaborative, interdisciplinary research project [http://republicofletters.stanford.edu/index.html Mapping the Republic of Letters], developed by Stanford University in 2010 and funded by the National Endowment for the Humanities (NEH). The aim of the project is to map the 17<sup>th</sup>- and 18<sup>th</sup>-century correspondence of prominent and influential intellectuals of the Age of Enlightenment. The “Republic of Letters” was a self-proclaimed community of scholars who exchanged their ideas via handwritten letters across Europe and the Americas. The researchers on the project used metadata to produce maps, charts and other visual tools. These modern visualization tools provide a greater understanding of the distribution of the letters over hundreds of years and help identify geographic “hot spots” in the archive. They shed light on, for example, Voltaire’s correspondence, which consists of about 15,000 letters. The visualization of the letter exchanges on a map shows the places where Voltaire traveled and reveals patterns in his writing at specific times and in specific places. These maps of correspondence raise new questions and facilitate new interpretations of the letters and related documents. The project also provides a basis for further research, not only concerning the Republic of Letters but also on related topics. The use of digital tools in the humanities has also led to the formation of organizations that foster research in the digital humanities. One of them is the European Association for Digital Humanities (EADH), established in 1973 under the name of the Association for Literary and Linguistic Computing. This organization is one of the constituent organizations of the Alliance of Digital Humanities Organizations (ADHO), formed in 2005, which supports and promotes digital research and education in all arts and humanities disciplines. In addition, numerous universities now offer undergraduate and graduate courses and programs in the digital humanities.
The main options analysed are the establishment of permanent European bodies to support institutions in investigating, overseeing or advising on research misconduct investigations. A European body that carries out investigations on behalf of institutions would ensure that investigations are carried out consistently, reduce the risk of conflicts of interest, allow expertise to develop, and professionalize the handling of cases. It would be particularly helpful for institutions that do not yet have any structures or experience for handling research misconduct allegations. Some obstacles would have to be overcome for such a body to be effective: institutions might be reluctant to expose internal problems for fear of damaging their reputation and losing their autonomy; national regulations might limit access to data; and some countries might not recognize its legitimacy. A further option would be to set up an oversight body that would not conduct investigations but only review investigations carried out by institutions, to make sure that they have followed appropriate procedures previously agreed on internationally. This might motivate institutions to follow those procedures, and so bring more homogeneity to the handling of allegations across Europe. In addition, an external check would help control and lower the risk of conflicts of interest. On the other hand, depending on its status, such a body might not be able to require an institution to redo a poorly conducted investigation, and if it did, this would require more resources for each investigation. Another role a European body could have is advisory: it could advise institutions on how to create structures and policies to prevent research misconduct and protect integrity, and it could even set up a database of experts to assist investigation committees. The main concern about such a body is that it might appear redundant, or in conflict with existing national advisory bodies.
Experts in their respective fields, and the organizations in charge of creating clinical practice guidelines, should be aware of the discrepancies that may arise if the grading system is not well defined. Ratings of the quality of evidence should be transparent and based on detailed, clear criteria, so that they can be used by clinicians and patients. However, clinicians and patients cannot be expected to master a variety of grading systems. A simple, transparent grading of recommendations, such as the GRADE system, is an example of a good solution: it allows its users to assess the judgments behind recommendations regarding health care.
The Catholic University of Leuven (KU Leuven) has a dedicated webpage on image integrity, identifying some of the most important sources and tools on the subject (available [https://www.kuleuven.be/english/research/integrity/practices/image-processing here], accessed on 24-04-2020). As their page is brief, a more elaborate description of what it contains, and additional sources, follows below. Rossner & Yamada (2004) wrote a prominent article arguing for a standard for image integrity. Working as editors of The Journal of Cell Biology, they noticed the discrepancies between the image integrity guidelines that journals gave to their authors (if any). To provide a comprehensive overview, they developed their own guidelines for The Journal of Cell Biology. They write that, for every aspect of the guideline, the main question is: “Is the image that results from this adjustment still an accurate representation of the original data?” (p. 5). Whenever the answer is ‘no’, researchers should provide a detailed description of the adjustments and their purpose, and the original image on request. If not, their actions might be regarded as misconduct. A step-by-step translation of the guideline is available on the website of American Journal Experts (accessible [https://www.aje.com/en/arc/avoiding-image-fraud-7-rules-editing-images/ here], accessed on 24-04-2020) and on the KU Leuven webpage. A similar guideline, and additional editorials on the subject, are provided by the journal Nature on its editorial policies page (available [https://www.nature.com/nature-research/editorial-policies/image-integrity here], accessed on 24-04-2020). The Center for Ethics and Values in the Sciences, of the University of Alabama at Birmingham, created a website for both students and researchers with extensive material on image integrity (available [https://ori.hhs.gov/education/products/RIandImages/default.html here], accessed on 24-04-2020). They provide guidelines with more in-depth explanations and illustration videos, as well as educational material such as case studies, discussion handouts and a quiz. The Office of Research Integrity provides a tutorial on how to use ‘action sets’ in Photoshop (available [https://ori.hhs.gov/actions here], accessed on 24-04-2020). These action sets allow you to document the changes you make to an image and ‘slide’ (i.e. go back and forth) between all the steps you made. The processing of the image is thereby made completely transparent if you provide the ‘action set’ combined with a copy of the original image. For those reviewing papers, a free open source program called InspectJ is available on GitHub to identify cloning, stitching, patching and erased objects within an image. An advanced version also provides histogram equalization and gamma correction for improved image inspection (both available [https://github.com/ZMBH-Imaging-Facility/InspectJ here], accessed on 24-04-2020).
There should be an open dialogue about research practices between all levels of staff at an institution. The guidelines themselves, as a written set of rules, should be easily accessible. All procedures should have a degree of transparency, and there should be secure channels of contact in case of concern about certain practices being planned or implemented.
The [https://www.embassy.science/resources/the-european-code-of-conduct-for-research-integrity European Code of Conduct] states that fairness and integrity are paramount in procedures for investigating misconduct, and it also sets out the principles to be followed.
This is still a novel area of research, and official advice and policies regarding the prevention of mental health problems in academia are lacking. However, previous research has established connections between organizational climate and health. Suggested actions to combat the rise of mental health problems in academia include raising awareness, creating more dialogue, providing training on mental health and emotional wellness, effective mentoring practices, monitoring mental health via anonymous surveys, and providing free counseling sessions for those identified as symptomatic or at high risk. What should also be considered is the need to establish official policies that reward researchers not just for their scientific output and ability to obtain funding, but also for their educational, mentoring and “wellness” practices. [https://embassy.science/wiki-wiki/index.php/Resource:D3784352-c18f-4c40-b862-d9ee2afabb0a This guide] was developed during the COVID-19 pandemic by the department of Experimental Immunology of Amsterdam UMC, which uses it to talk about stress with its PhD candidates.
When submitting the final written output of their research, researchers can publish it in an open access journal. The [https://doaj.org/ DOAJ] indexes more than 13,000 open access, high-quality, peer-reviewed journals. Given that only a small portion of these open access journals require payment of an Article Processing Charge (APC), researchers can choose from a variety of journals. At the same time, researchers can post a preprint of their article to a preprint server (a list of preprint servers, organised by discipline, is available [https://osf.io/preprints/ here]). Research data can also be stored online in a research data repository; for an extensive list of repositories, researchers can check the [https://www.re3data.org/ Registry of Research Data Repositories] and [http://databib.org/ Databib]. [https://zenodo.org Zenodo] is among the well-known repositories that allow researchers to archive various digital artefacts such as data sets, research software, reports and posters.
The main principle of Plan S states that all research funded by public or private grants must be openly accessible when published. There are, in addition, ten sub-principles:
#Authors should retain copyright of their publications, which should be made available under a Creative Commons Attribution license;
#Robust criteria for the evaluation of high-quality open access journals, platforms and repositories should be developed;
#Funders should provide incentives to establish and support open access journals and platforms where they do not yet exist;
#Funders should cover the cost of publication fees;
#Funders should support a diversity of business models for open access journals and platforms;
#Funders should ensure transparency by supporting the alignment of strategies, policies and practices;
#Monographs and book chapters may follow a longer process of achieving open access;
#Hybrid models of publishing should only be a means of transitioning to full open access;
#Funders should monitor compliance;
#Research outputs should be assessed on the basis of their intrinsic value, and not on their scientometric characteristics.
In September 2018, 11 national research funding organizations (from Austria, France, Ireland, Italy, Luxembourg, the Netherlands, Norway, Poland, Slovenia, Sweden and the United Kingdom) signed a commitment to implement all that is necessary for the Plan S mission by 1<sup>st</sup> January 2020.
The Qualification portfolio, implemented by Utrecht UMC, is to be described in further detail elsewhere on The Embassy.
Research integrity advisors (RIAs) are experienced researchers with in-depth knowledge of research integrity and research ethics. They are appointed by the university to serve the complex role of dealing with all sorts of questions related to research integrity practices, procedures, and issues. In Australia, for example, universities have established teams of research integrity advisors to assist researchers and research students in conducting research with integrity and to advise them on questions that may arise during the research process. If you are not sure whom to talk to, university web pages contain lists of RIAs and guidance on when to approach an advisor. At Melbourne University, RIAs also have a responsibility to report alleged cases of research misconduct to authorized bodies. In Europe, some research institutions, such as Aarhus University in Denmark, have special advisors for supporting good scientific practice. Moreover, LARI (the Luxembourg Agency for Research Integrity) provides research ethics consultations to researchers of all levels; while LARI advisors are not officially called RI advisors, they have a similar role.
The organizational structures of RI committees and their responsibilities regarding cases of research misconduct vary. In some countries, RI committees (or commissions) are established at the national level, and their responsibility is therefore to handle cases of research misconduct, or to serve as an advisory body, for all research institutions within state borders (e.g. the National Commission for Research Integrity in Luxembourg, the Finnish National Board on Research Integrity, the Danish Committee on Research Misconduct (DCRM), the Commission for Research Integrity in Austria, the French Office for Scientific Integrity, and the Netherlands Board on Research Integrity). For example, Danish law on research misconduct stipulates that the DCRM is responsible for handling cases of research misconduct, while each institution is responsible for processing cases of questionable research practices. Some RI committees are established as part of research integrity organisations that provide training and other educational activities for researchers (e.g. the Luxembourg Agency for Research Integrity and the Austrian Agency for Research Integrity). In other countries, dealing with cases of research misconduct is the responsibility of research institutions and institution-based committees, as there is no national body to handle investigations and process cases of misconduct. An example of the latter is Sweden, where each research institution is responsible for investigating research misconduct and imposing sanctions. All these RI bodies, at both the national and the institutional level, do important work in promoting research integrity and guiding researchers on the principles of good scientific practice. There are numerous documents issued by RI bodies and committees in the form of guidelines and checklists, as well as documents describing committees' procedures for dealing with misconduct allegations.

Some European examples are: the Guidelines for Good Scientific Practice by the Austrian Agency for Research Integrity, the FNR Research Integrity Guidelines, the Guidelines for the Investigation of Misconduct (by the Irish National Forum), the Roadmap for Scientific Integrity 2020 (OFIS), Integrity and responsibility in research practices (CNRS-CPU), the Scientific integrity guideline (CNRS), and the TENK Guidelines.
ENERI has recently published an insightful policy brief on what makes a research ethics and research integrity expert. Based on a participatory research design culminating in a series of consensus conferences with 50 stakeholders from various positions within or close to academia, ENERI found the following skills to be particularly useful for REC members:

'''Hard skills'''
*comprehensive knowledge of relevant guidelines, regulations, and laws
*experience with ethical assessments or academic qualifications in relevant disciplines, like philosophy or law
*research experience
*legal expertise
*analytical skills
*the ability to think critically

'''Soft skills'''
*communicative skills
*interpersonal skills
*attention to detail
*the ability to manage and resolve conflicts
*the ability to work collaboratively

'''Process skills'''
*administrative and management skills
*decision-making skills
*the ability to transform abstract theoretical ideas into practical recommendations

'''Emotional skills'''
*open-mindedness
*independence
*awareness of social norms and the likely consequences of breaching them
*personal commitment

According to ENERI, individual RE experts inevitably need the hard skills, but do not necessarily have to possess all of the soft, process, and emotional skills. All of these skills should, however, be present at the institutional level in RECs, which should therefore have a diverse membership with complementary skills. The role of the chair is particularly crucial: the chair needs broad soft, process, and emotional skills to guarantee that all represented perspectives are included in assessment, review, and advice procedures. Hence, chairpersons need more skills than ordinary board members due to the pivotal position they occupy in organizing inclusive deliberations.
Several documents and declarations have been developed in relation to research ethics committees. The European Network of Research Ethics Committees (EUREC) brings together existing national research ethics committees, networks and comparable initiatives at the European Union level. RECs can be established at individual academic institutions and universities. In the United States, Institutional Review Boards (IRBs) exist in both academic and state institutions.
The details of an RIO's job vary from country to country, but the position is mandatory in many. In the United States, any institution that receives Public Health Service funding reports to the Office of Research Integrity (ORI) at the Department of Health and Human Services, and an RIO serves as the liaison between the ORI and their institution. By law, RIOs ensure that the institution has policies and procedures for investigations, and they report these to the ORI. They also contribute to investigations that lead to retractions, expulsions, and (sometimes) arrests. In the European Union, each country has slightly different requirements and roles for its RIOs, but their task is essentially the same; the European Network of Research Integrity Offices serves as the expert agency in the EU, assisting RIOs with advice and guidance. With the increasing pace of scientific publication, an RIO's job is more important than ever. RIOs serve an essential role in the scientific community: they protect individual researchers from accidental missteps; they protect the public from poor, fraudulent, and fabricated science; and they protect the whole scientific community by building public trust. An RIO serves on the front lines of scientific integrity, present to guide researchers and foster trust in institutions. RIOs exist to protect science and are a resource for researchers who need guidance or help with misconduct questions.
The ‘Research: increasing value, reducing waste’ project, led by The Lancet medical journal, provides an excellent example of an RRI approach. This project aims to address deficiencies in the medical research system that reduce the value of research and often result in significant financial loss, caused by inadequate research agendas, flawed research designs, the non-publication of negative results, and poor reporting of findings. In order to increase the value of research and reduce waste, the project adopted four RRI process requirements: diversity and inclusiveness, transparency and openness, anticipation and reflection, and responsiveness and adaptation to change. The inclusion of patients and medical caregivers in setting the right research agenda is recommended to increase diversity in the research process. The project proposes that research should be more transparent and open, and supports full and public documentation of the research process. It also highlights the need to discuss current practices that lead to wasted effort. Finally, a series of five papers published in The Lancet offers 17 recommendations outlining the changes that should be made to current structures and systems. RRI is not just about better science from a scientist’s point of view; it is a continuous effort to talk to diverse societal actors and involve them in the research process through meaningful conversations and contributions, beyond “just” being a participant. Various activities for bringing more awareness to research processes, such as science cafés or open lab days, are only a part of the framework. Collaboration with small enterprises, social innovators and citizen scientists is also a crucial part of RRI, involving the improvement of science and society through the mutual sharing of expertise and experiences.
Different types of science policy may be adopted. Sometimes investment in basic research is preferred; in these cases the expectation is that some kind of breakthrough will result in a vast array of new technologies, which will then be commercialized and pay back the investment. At other times the focus may be on technology development, with more support for engineering than for basic science. The most extreme examples of such science policies are the Manhattan Project and the space programmes pursued by the US and the Soviet Union in the second half of the 20th century.
In these situations, history teachers are mediators between different and sometimes conflicting collective memories. Teaching topics such as the civil war in Northern Ireland, where everyday life reminds the population of their divisions due to past and present conflicts, is particularly difficult for history teachers who teach in that area. According to recent findings, many teachers feel uncertain and underprepared when teaching controversial and sensitive issues because of fear of emotional reactions in the classroom, perceived pressure from the school, parents, the local community or the state, or even because of their own beliefs and identities. This is why some European universities offer courses on teaching controversial and sensitive issues in history education, with the aim of preparing future teachers for these challenges. Providing students with a balanced academic approach to these issues is necessary to help them understand that almost every historical topic is open to different interpretations, particularly when teaching these issues in societies with opposing narratives. That is an opportunity for a multi-perspective approach, but also for developing students’ ability to deal with controversial issues and to debate with people who do not share their opinion. The main strategies teachers can use when dealing with these issues in the classroom are:

*Distancing strategy: used when an issue is highly sensitive in the community where the teacher is teaching, or when the class is polarized. This strategy proposes examining analogies and parallels, or going back further in time to trace the history of the issue being discussed.
*Compensatory strategy: used when students express attitudes based on ignorance, when a minority is being bullied or discriminated against by the majority, or when there is consensus in the class in favour of one particular interpretation. In these cases, teachers can play the devil’s advocate, highlight contradictions in students’ responses or demythologize popular beliefs.
*Empathetic strategy: used when the issue involves a group or nation that is unpopular with the students, when the issue involves latent discrimination against some group, or where the issue is distant from the students’ own lives. Teachers can use several methods, such as role reversals, for-and-against lists, role play and simulations, as well as vicarious experience through examining films, novels or documentaries.
*Exploratory strategy: used when the issue is not clearly defined, or where the teacher’s aim is also to use the issue as a tool to develop analytical skills. In such conditions, students can explore people’s diaries and memoirs or conduct oral history.
Researchers who work with personal data can consult the GDPR online [https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:32016R0679 here]. In 2020 the European Data Protection Supervisor issued [https://edps.europa.eu/sites/edp/files/publication/20-01-06_opinion_research_en.pdf A Preliminary Opinion on data protection and scientific research]. Researchers can also contact their local Data Protection Officer or study supervisor for more information on handling personal data.
Whistleblower protections are an important element of an institution's ethics code, describing procedures for dealing with allegations of misconduct and violations of good practice. There is general agreement within the scientific community that reporting misconduct is essential to the prevention and management of misconduct, and that whistleblowers should be provided with adequate safeguards. Whistleblower protections also support a culture of scientific integrity within an institution. However, policies on researchers’ duties to report, and the corresponding protections, differ significantly by institution and country. [https://www.embassy.science/resources/the-european-code-of-conduct-for-research-integrity The European Code of Conduct for Research Integrity] contains the following guidance in the section “Dealing with Violations and Allegations of Misconduct”: "National or institutional guidelines differ as to how violations of good research practice or allegations of misconduct are handled in different countries. However, it always is in the interest of society and the research community that violations are handled in a consistent and transparent fashion. The following principles need to be incorporated into any investigation process. […] Procedures are conducted confidentially in order to protect those involved in the investigation. Institutions protect the rights of ‘whistleblowers’ during investigations and ensure that their career prospects are not endangered." In the UK, universities appear to develop specific whistleblowing policies for different types of misbehaviour (grievance, bullying and harassment, discipline, research misconduct).
As an example, we refer to the [https://le.ac.uk/~/media/uol/docs/about-us/policies/whistleblowing-policy-updated-may-2017.pdf document of the University of Leicester], which guarantees confidentiality for whistleblowers: "The University will treat disclosures of information made under this Policy in a confidential and sensitive manner. The identity of individuals making allegations may be kept confidential, if requested by the individual(s) concerned, so long as it does not hinder or frustrate any investigation. In this event, the University will consult the individual before it takes any further action which might break the initial confidentiality. It should be recognised, however, that the investigation process may, of necessity, reveal the source of the information and, as part of the investigation, an individual making a disclosure may need to provide a statement as part of the evidence required." In the USA, whistleblowers have well-established legal protection. The Whistleblower Protection Enhancement Act of 2012 strengthened protection for federal employees who blow the whistle on waste, fraud, and abuse in government operations.
Several online companies offer altmetrics services, among them Altmetric, Impactstory, and Plum Analytics. These services can track HTML views, PDF downloads, shares on social media platforms, and saved and cited items. Altmetrics scores are often indicators of how popular an article is online with the general public. Unlike traditional research metrics, altmetrics software enables the user to track the dissemination of publications in real time, and some publishers (BioMed Central, PLOS, Nature, Elsevier) have started offering this information to their readers. Some argue that this form of metric is not a good indicator of popularity or quality, as social media activity and the time of publication can strongly influence the score, and there seems to be no correlation between citations and altmetrics.
The website [http://www.eigenfactor.org/ www.eigenfactor.org] reports measures for publications indexed in the JCR, as well as journals, books, newspapers, and other reference items cited by those publications. The number of journals covered has increased each year: in 1997 the website listed 6,439 journals, whereas in 2014 it measured the influence of 11,200. Two principal scores are calculated: the Eigenfactor score and the Article Influence score. Eigenfactor scores are scaled so that the scores of all journals listed in the JCR sum to 100; a journal with an Eigenfactor score of 1.0 therefore has 1% of the total influence of all indexed publications. In 2014, the journal PLoS One had the highest Eigenfactor score, with a value of 1.533. Since larger journals receive more citations, they also have larger Eigenfactor scores. However, the most prestigious journals are not necessarily the largest, but those that receive the most citations per article. To capture this, the Article Influence score measures the influence of a journal per article. It is calculated as a journal’s Eigenfactor score divided by the number of articles in that journal, normalized so that the average article in the JCR has an Article Influence score of 1. Therefore, if a journal's Article Influence score is 3.0, its articles are on average three times as influential as the average article in the JCR. In 2014, the journal CA-A Cancer Journal for Clinicians had the highest Article Influence score, with a value of 3.95.
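The normalization described above can be illustrated with a minimal sketch. This is not the official Eigenfactor algorithm (which involves a PageRank-style computation over the citation network); it only shows the final step of turning Eigenfactor scores into Article Influence scores. The journal names and numbers are invented for the example.

```python
# Illustrative sketch: derive Article Influence (AI) scores from
# hypothetical Eigenfactor (EF) scores and article counts, normalized
# so that the average article across all listed journals scores 1.0.
journals = {
    # name: (eigenfactor_score, article_count) -- made-up example values
    "Journal A": (1.5, 30000),   # large journal, big EF, many articles
    "Journal B": (0.05, 120),    # small journal, high influence per article
    "Journal C": (0.2, 2000),
}

total_ef = sum(ef for ef, _ in journals.values())
total_articles = sum(n for _, n in journals.values())

# Normalization constant chosen so the article-weighted mean AI is 1.0
scale = total_articles / total_ef

article_influence = {
    name: scale * ef / n for name, (ef, n) in journals.items()
}
```

With these invented figures, the small Journal B ends up with a higher per-article influence than the much larger Journal A, mirroring the point in the text that prestige per article and journal size are different things.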
Advanced data mining techniques can help identify impact factor manipulation; see [https://link.springer.com/article/10.1007%2Fs11192-016-2144-6 this article].
A study analyzed 61 neuroscience journals from 2003 to 2011, aiming to find out whether publication delay increased over the decade and whether this phenomenon can inflate IFs. The study showed that while in 2003 most journals had no online-to-print lag, by 2011 about 50% of the studied journals had an online-to-print lag greater than 3 months, with lags ranging from 0 to 19 months. Lags increased over the decade, which in turn raised the journals' IFs; moreover, the larger the online-to-print lag, the higher the increase in IF. This is why some researchers suggested that the date of online publication, rather than the date of print publication, should be used to calculate the IF. In the fall of 2020, Clarivate Analytics announced that it would make this shift, a change that should reduce ambiguity and contribute to a more transparent calculation of citation metrics. The 2021 release, using 2020 data, is planned as the transition year, and the full switch will begin in 2022 using 2021 publication data.
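Why the online-to-print lag inflates the IF can be made concrete with a small sketch of the standard two-year impact factor: citations received in year Y to items published in Y-1 and Y-2, divided by the number of citable items published in Y-1 and Y-2. The article data below are invented, and real JIF calculations involve further subtleties (citable-item classification, database coverage) that this toy function ignores.

```python
def impact_factor(articles, year, date_key):
    """Toy two-year impact factor for `year`.

    articles: list of dicts like
        {"online": 2008, "print": 2010, "cites": {2011: 9}}
    date_key: "online" or "print" -- which date assigns an article
        to its publication year.
    """
    window = (year - 2, year - 1)
    in_window = [a for a in articles if a[date_key] in window]
    if not in_window:
        return 0.0
    citations = sum(a["cites"].get(year, 0) for a in in_window)
    return citations / len(in_window)

# An article that appeared online in 2008 but in print in 2010 had two
# extra years to accumulate citations. Under print dating, its 2011
# citations still count toward the 2011 IF window (2009-2010); under
# online dating, the article falls outside that window entirely.
articles = [
    {"online": 2008, "print": 2010, "cites": {2011: 9}},  # long lag
    {"online": 2010, "print": 2010, "cites": {2011: 3}},  # no lag
]
```

With these made-up numbers, the print-dated 2011 IF is (9 + 3) / 2 = 6.0, while the online-dated IF is 3 / 1 = 3.0: the lagged article inflates the print-based figure, which is the effect the study describes.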
An article analyzed bibliometric indicators for nuclear medicine journals. By comparing Scopus and ISI, the authors found that seven nuclear medicine journals were indexed only in Scopus. When these journals were analyzed as if they were part of the ISI database, their potential IFs placed them in 11th, 14th and 15th place on the nuclear medicine journal list. This result suggests that Scopus-indexed journals should not be overlooked when conducting quality assessment. Another comparison of ISI and Scopus, based on pediatric neurology journals, showed that three journals were indexed only in Scopus. Again, the potential IFs of these three journals were calculated, ranking them 12th, 13th and 14th among pediatric neurology journals. Self-citation does not affect the SJR, but it has a great effect on the IF. When assessing the quality of a journal, one should therefore be aware of the potential errors of the IF and become familiar with newer bibliometric indicators (such as the ES and SJR) for best results.
'''Detection''' Educators increasingly check student essays and theses using plagiarism-detection software. Publishers can use similar software to run similarity checks on submitted manuscripts against published work and archived submissions. Springer has a dedicated [https://www.springer.com/gp/authors-editors/editors/plagiarism-prevention-with-crosscheck/4238 page] on the prevention of plagiarism. There are, however, currently no known initiatives for funders to check plagiarism in funding applications.
The use of card exchange games is an approach to teaching the philosophy of science. It was developed by Bergquist and Phillips in 1975 and later popularized by Cobern. The idea of card games is to foster dialogue between participants about statements written on cards, and such games have been effective in improving students’ knowledge. In the peer review card exchange game, six domains of peer review are explored through statements written on cards: responsiveness, competence, impartiality, confidentiality, constructive criticism and responsibility to science. Participants may agree or disagree with the statements, but they are asked to discuss them and reach a consensus as a group, finding the cards they all agree on. Afterwards, they take part in a moderated discussion.
The regulation of research integrity training for PhD students varies among countries. Some countries mandate RI training at the postgraduate level in their national codes; Denmark, for instance, does so in the Danish code of conduct for research integrity, which states that research integrity training must be provided by higher education institutions. Similarly, in France the Ministry of Education has declared that all PhD students must be trained in research integrity and research ethics before defending their thesis. In some countries, training is provided by both universities and independent research integrity institutions. An example of the latter is Luxembourg, where training for PhD students on research ethics and the principles of good research practice is conducted by the University of Luxembourg, while LARI (the Luxembourg Agency for Research Integrity), an independent body, offers training for researchers at all career stages. How the training is conducted also differs: LARI offers highly interactive, face-to-face training combining traditional and creative methods, while the Finnish National Board on Research Integrity (TENK), for example, provides online courses.
Best practices include the supervisor creating a relaxed atmosphere, being open to communication and making themselves approachable to students. In that sense, it is useful when supervisors respect the ideas of open science and share their knowledge and experience with the researcher or student they are supervising. Relaxed yet professional communication can also help achieve these goals. If issues occur, or if a supervised researcher or student makes mistakes, this should be resolved through suggestions and recommendations for improvement rather than harsh criticism or dwelling on the student's failures. Students and early-career researchers should be honest and respectful in communication, while taking the supervisor's comments seriously and accepting criticism as a tool for improvement.
There is increasing recognition that lecturing supervisors about responsible supervision may not be the most useful approach. Below are some innovative examples that integrate responsible research with responsible supervision. This list is far from comprehensive, but should serve as a starting point for exploring the topic. First, Whitbeck described a group mentoring approach intended to support the discussion of research integrity in supervision; in addition, the research group was helped to grasp the complexity of situations they may encounter that challenge the integrity of their research. Second, Kalichman & Plemmons have developed a workshop curriculum for supervisors, explicitly designed to convey responsible research in the actual research environment, as opposed to a classroom environment separated from the lab. Third, Anne Walsh and Mark Hooper from the Queensland University of Technology Office of Research Ethics and Integrity are developing a fully online training module that challenges supervisors to reflect on their own supervision and formulate concrete goals to improve their supervision skills, explicitly connected to responsible research. Their full training will be released in late 2019. Finally, as part of the Academic Research Climate in Amsterdam project, an interactive training called ''Superb Supervision'' was developed. The training continuously alternates responsible research and soft-skill development, and participants meet between sessions to discuss their own dilemmas; see [http://www.amsterdamresearchclimate.nl/superb-supervision/ here]. The [https://www.eur.nl/en/about-eur/strategy-and-policy/integrity/scientific-integrity/dilemma-game Erasmus Dilemma game] lists a variety of example dilemmas from the perspectives of both junior and senior researchers. These example dilemmas may serve as useful conversation starters when discussing responsible supervision.
The field of gaming in RCR education is growing. One example is "Grants and Researchers", a card game designed to simulate the experience of ethical decision-making within the context of academic research; the rules of the game are available [http://youtu.be/L4Jk84HlLN8 here]. The Gaming Against Plagiarism (GAP) project developed three games that put the player in the central role in various issues of authorship, misconduct and intellectual property. More information on the games can be found [https://digitalworlds.ufl.edu/research-production/projects/gaming-against-plagiarism-gap/ here].
A review from 2010 defines three models of supervision:
*a traditional model, a dyadic relationship between a supervisor and a student;
*group supervision, in which there is a relationship between a student and a supervisor, as well as between a student and other students; and
*a mixed model, which incorporates the two models and adds new technologies, such as online courses and teleconferences.
There is a guide for the supervision of doctoral students in healthcare that defines the roles and requirements of a supervisor. These include clarifying the student's purpose; understanding the student and their context; guiding them methodologically, intellectually and administratively; facilitating their communication; and, later on, introducing them to the scholarly community.
The European Code of Conduct (2017) specifies that training is necessary for researchers to improve supervision and mentoring. Please click [https://www.embassy.science/resources/the-european-code-of-conduct-for-research-integrity#entry:29:url here] for the European Code of Conduct.
The Taskforce Scientific Integrity of Erasmus University Rotterdam has made a number of recommendations for the use of the game within the institution, one of which is that the game be used as part of PhD training as well as in faculty training sessions on research integrity. The dilemma game has also proved useful beyond its home institution: it is used as an exercise in [https://www.ucl.ac.uk/research/integrity/training-accordion/integrity-seminars research integrity seminars] provided by University College London, and the PRINTEGER project has listed it as one of the [https://printeger.eu/upright/toc/ learning modules] on their platform. As an interactive educational exercise, the dilemma game is also used in training sessions for research integrity trainers by the Horizon 2020 VIRT2UE project. '''Dilemma game app''' The developers have been adapting the card game into an app, in order to make the dilemmas more accessible, more relevant to a rapidly changing research environment, and available for different purposes. Researchers and teachers can use the app individually, in a classroom game mode, or in a lecture mode by connecting as a group. Moreover, users are now regularly confronted with integrity dilemmas through notifications, with new dilemmas added each month and an invitation to share their own research integrity dilemmas. The app is an inspiring initiative because it serves several objectives: it is a usable tool for training purposes, it creates ongoing awareness, and it supports research culture by facilitating discussion. The dilemma game can be downloaded as an application for [https://play.google.com/store/apps/details?id=nl.eur.dilemmagame&gl=NL Android devices] and [https://apps.apple.com/nl/app/dilemma-game/id1494087665 iOS]. The app has three modes (individual, group and lecture), allowing users to interact with the dilemmas in a variety of ways. The lecture mode can also be opened in a [https://dilemmagame.eur.nl/ui/ browser], so that students can be shown the dilemmas and their answers.
In the late 1990s, a large cross-national survey was conducted with the aim of exploring young people’s opinions of their history education in Europe. Respondents had to put themselves in the shoes of a young man or woman from the 15<sup>th</sup> century being forced into marriage, and were given six options:
*Refuse, because it is inhuman, immoral and illegitimate to force someone to marry without real love;
*Obey, because a good economy is more important for a family than passionate love between wife and husband;
*Run away to a nunnery or a monastery, because religious life is worth more than worldly life;
*Consent, because nearly all young people have married in accordance with their parents’ decisions;
*Resist, because it is the natural right of any individual to marry for love;
*Obey, because rebellion against the parents’ will is a rebellion against the law of God.
Respondents’ answers mostly showed a preference for rebellion “in the name of love and natural rights” and difficulty accepting the reasons for obedience (tradition, paternal power, economic considerations) common to the 15<sup>th</sup>-century mentality. Most students were unable to put themselves in the shoes of young people living in the 15<sup>th</sup> century because the question presented to them was out of their time and context. If we expect students to apply empathy, they should have more knowledge about 15<sup>th</sup>-century society and some insight into the mentality of the people living at that time. Many students projected their own contemporary opinions, feelings and stereotypes onto 15<sup>th</sup>-century young people. Since empathy can be learned and exercised, contemporary history curricula use it as one of the tools for “historical understanding”. Several strategies that can be applied in the history classroom to develop empathy among students are role-playing, structured debate, narrative writing about the issues historical figures confronted, history simulations, pro-and-con lists, examining films, novels and documentaries that provide “vicarious experiences”, and visits to historical sites. These strategies are particularly applicable when an issue concerns a group or nation unpopular with some or all students, or when an issue involves discrimination against a certain group. Empathy can also help students understand different cultures and improve communication and relations in multicultural societies.