What We Want From Science
A 2005 study published in Nature surveyed several thousand early- and mid-career, NIH-funded scientists based in the United States. Over one third of respondents reported engaging in at least one of the behaviors on the NIH's list of top-ten misbehaviors, which range from falsifying or "cooking" research data, to using another's ideas without permission or due credit, to changing the design, methodology, or results of a study in response to pressure from a funding source. It is important to note that the one-third figure is likely a conservative underestimate, since the survey data was self-reported.
The pressures of academia and industry force researchers to "publish or perish." Attempts to remedy this issue focus on training in the responsible conduct of research through programs such as CITI training. However, such certification programs are easily gamed through online cheating and collaboration, as participants mechanically complete the requirements with a "check the box" mentality rather than a desire to learn about the ethics of research. The certifications focus on "fixing" the individual's behavior rather than cultivating personal morals grounded in research integrity and respect for scientific research.
Over the past decade in particular, a culture of "junk science" has swept through prominent scientific publications. The 1989 claim by University of Utah chemists Stanley Pons and Martin Fleischmann to have achieved cold fusion, a form of nuclear fusion, without the equipment such a reaction would require, is just one example of the rising concern about the validity of modern research findings. Pons and Fleischmann took their claim straight to the media rather than submitting it to a peer-reviewed journal for publication—itself an indication of questionable science. A direct pitch to the media, the charge that a powerful institution is trying to suppress the author's work, effects at the limits of detection, anecdotal evidence, credibility claimed from centuries of belief, discovery in isolated settings, and the proposition of new scientific laws to justify the claim are all hallmarks of "revolutionary work" worthy of skepticism.
Large increases in funding over relatively short periods of time demand radical, “sexy” results. For example, the NIH and Department of Energy (DOE) invested $3.8 billion to sequence the human genome under the Human Genome Project (HGP). The milestone occurred two years ahead of the scheduled 2005 completion date, resulting in an economic impact of $965 billion between 1988 and 2012 in related research and private genomics industry work.
Nevertheless, most scientific research and work (NIH-funded and medical research in particular) does not result in headline-worthy discoveries. Still, these incremental advances are what motivate the linear model that drives American science. A negative result is just as worthy as an affirmative one. Respect for the Popperian falsification principle has been steadily eroded, and it must be revived in the evaluation of scientific research.
There is an inextricable relationship between scientific professionals (i.e., research scientists, medical professionals, science-oriented academics) and decision-makers; however, which scientific professionals should influence policymaking remains an ongoing debate. Trusted professionals are referred to as experts, and the law relies heavily on expert opinion and testimony, from the courtroom all the way to Capitol Hill. Article VII of the Federal Rules of Evidence governs opinion and expert testimony. Rule 701 delineates opinion testimony by lay witnesses: such testimony is admissible only if it is rationally based on the witness's perception, helpful to understanding the testimony or to determining a fact in issue, and not based on the scientific, technical, or specialized knowledge covered by Rule 702, which sets the parameters for testimony by experts.
The law depends upon experts out of a fear that the lay public is not capable of grasping scientific issues. Yet a jury can likely comprehend common scientific terms such as DNA (the molecule that carries our genetic information) and genome (one complete copy of an organism's DNA, found in nearly all of its cells), despite advice to the contrary. Jurors are already required to perform relatively advanced tasks, such as processing evidence, recounting personal experiences, and forming opinions as a trial progresses. Although there are clearly complex issues on which the lay public requires expert advice, expert testimony must be based on appropriate data and reliable principles and methods, applied only to the facts of the case.
Nevertheless, expert testimony is largely partisan in America. There is no rule prohibiting parties from selecting their own experts. Rule 706 addresses the process of obtaining court-appointed experts, including appointment, compensation, and disclosure of appointment. Yet part (d) of that same rule explicitly states that nothing limits parties from selecting their own experts. A Yale Law professor observed that the European judge who visits the United States experiences "something bordering on disbelief when he discovers that we extend the sphere of partisan control to the selection and preparation of experts." Dr. Leonard Welsh, a psychologist from Iowa who frequently testifies on behalf of the state, feels that his work is often compromised. Welsh often leaves the courtroom feeling as if he "needs a shower" to cleanse himself of the opinions the court presses him to offer in convoluted cases. For this reason, many reputable professionals decline to represent parties in the courtroom on ethical, moral, and professional grounds. After all, expert testimony is a lucrative business in America—Google spent nearly one million dollars in expert fees during the 2011 Oracle v. Google case.
With the rapid technological and scientific advances of the postmodern era, the "non-expert" public plays an increasingly prominent role in shaping American legislation and policymaking. This phenomenon is not necessarily new, however: in 1897, the Indiana House of Representatives passed House Bill 246 without opposition, a bill that proposed changing the value of pi, the ratio of a circle's circumference to its diameter. The state senate rejected the bill, but such an initiative is representative of a deeply ingrained misunderstanding of science and technology in decision-making circles. Psychologically, when people begin to learn about a new concept, their source of information directly shapes the opinions they form of the subject. Associate Professor Donald Braman of George Washington University Law School asserted that a person's opinion can be drastically changed by altering where they first encounter the information. To draft and implement the most sustainable policies on scientific and technical matters, learners must absorb information from a variety of sources. Kuhn suggests that although awareness eventually changes with time, slow communication hinders paradigm shifts even when transformative knowledge is readily available.
The presentation of scientific and technological concepts directly impacts the perceptions of the public and policymakers alike. In several studies, focus groups associated the field of synthetic biology, the laboratory assembly of life through the construction of DNA, with negative connotations, because the word "synthetic" has recently been used in contrast to positively viewed words such as "natural" and "organic." Still, the credibility of a given claim is ultimately what turns a proposition into a reality (or not). In this context, scientific credibility refers to the legitimacy of a claim as authoritative knowledge—what gives science the power of a voice. Credibility encompasses social and cultural authority, combining power, legitimacy, trust, persuasion, and dependence into a complex system capable of persuading even the most vehement opponents of an issue to reconsider their opinions. AIDS research in particular struggles to address questions of credibility because of the issue's inextricable association with controversy, politicization, and uncertainty. People expect results fast, and because experts have yet to cure AIDS, the scientific credibility of AIDS research is diminishing as vocal opposition rises. Increasing not only scientific communication but also public leadership through action could help.
Industry and universities have different missions and often express contradictory values and mutual distrust despite their tight relationships. Universities believe they have no choice but to maintain this relationship because they depend on industry for funding, while industry depends on universities for their knowledgeable human capital and for a liberal, risk-taking setting that industry cannot afford. The licensing and patenting power struggle over industry-incentivized research conducted at universities led to the 1980 passage of the Bayh-Dole Act, which granted intellectual property rights to the research institutions themselves. The Act cut both ways, however: while it established who governed industry-commercialized technologies, it also spurred a surge in the commercialization of new federally funded technologies.
Similarly, a general acceptance exists that manufacturers and scientists should be responsible for product development, even though in studied fields, ten to forty percent of individual consumers regularly develop or modify products. Lead user theory empowers the "lead user," who is unconstrained by institutional bureaucracy. The theory recognizes that these users sit at the leading edge of market trends and face needs that the majority of consumers will encounter only later, ultimately innovating ahead of the general market. Manufacturers, by contrast, lack such strong ties to users' actual needs, which leads them to target production toward larger, more certain markets.
The comfort of contemporary technological determinism and misconceptions about scientific and technological advancement delay the inevitable Kuhnian paradigm shift necessary to overcome societal stagnancy and complacency. The United States government must move beyond the notion that "throwing money" at an issue will yield sustainable solutions. Despite the common belief that America's future generation of leaders is undereducated in STEM (science, technology, engineering, and mathematics), Hewlett-Packard has laid off roughly 55,000 workers since 2012, according to Business Insider, and that number could reach 85,000. By strengthening the ties between the hard sciences and decision-makers, both parties may identify the same flaws in the current linear-fashioned science policy system, spurring collaboration in the quest for solutions.
Science and technology are powerful weapons for positive sociopolitical change. By questioning the ethos of science and the rule of law, science policy may progress more effectively and ethically.