Fog-laden Patrick's Point juts into the Pacific Ocean,
        home to centenarian redwoods presiding over the marine
        campground and hiking paths.  One path leads to a small
        cove beach, protected by cliffs.  The sandless Moonstone
        Beach is host to countless pilgrims, young and old, who
        squat to comb the millions of rocks for moonstone - pure,
        white, unblemished, smooth.  Their searches are mostly in
        vain; there is not a piece of moonstone in sight.  The
        beach, however, is littered with jade.  Had it been
        designated "Jade Beach," I would not have seen a piece of
        jade in sight.  Visitors are not looking for any beautiful
        sea rock; they are looking for moonstone, and thus miss
        the cornucopia of other treasures under their feet. 
      
      
        Anytime we prescribe an agenda, a goal, a desired outcome
        for our inquiries, we blind ourselves.  Looking for one
        result, we overlook another.  Complete objectivity is
        impossible, simply because even the most rational people
        approach their investigations with a set of questions, an
        agenda.  But isn't this how we define science - by seeking
        answers to particular questions?  Without questions, what
        "science" is left?  It is impossible, given the infinite
        mysteries our universe holds, to approach science without a
        guided destination.  You have to test something. 
        Science gets itself in trouble, however, when the agenda
        becomes more important than the data.
      
      
        Science is facing an objectivity crisis.  Digital media
        technology permits the doctoring of images, and scientists
        have bitten into the forbidden fruit.  Dr. Hany Farid, an
        associate professor of computer science at Dartmouth
        College, specializes in the field of detecting image
        doctoring.  He laments that "[i]t used to be that you had a
        photograph, and that was the end of it - that was truth,"
        but that today, amidst the abundance of digital media,
        images need to be subjected to the same scrutiny as the
        written word.
        How does the scientific community establish a structure
        for evaluating the objectivity and validity of its
        publications? 
      
      
        Science has attempted to devise procedures - the Scientific
        Method, for example - to ensure that questions in
        scientific research never become more important than the
        findings they produce.  The Scientific Method prescribes
        doses of observation and description, formulation of a
        hypothesis, prediction of outcomes based on that
        hypothesis, and performance of experiments to test it. 
        It is science's rational, emotionally detached
        approach to investigation that prompts our society to hold
        science as the model of objectivity.  The scientific method
        is designed to systematically and objectively evaluate the
        accuracy of an observed phenomenon or explanation.  In an
        ideal, theoretical, emotionless world, this might work. 
      
      
        Many scientists, however, have agendas.  Scientists
        hypothesize, conjecture, anticipate, and hope; when results
        don't fit those speculations, objectivity is threatened. 
        The scientific community has developed several
        mechanisms for ensuring the objectivity of its
        publications.  One such mechanism, the peer review system,
        at least subjects potential articles to the scrutiny of
        colleagues.  It attempts to guarantee
        that distributed results uphold standards - namely, that
        they can be reproduced, and that they can be measured. 
      
      
        Though admittedly not perfect, the peer review system has
        for some time improved the academic validity of scientific
        writing - methods, data, and the resulting publications. 
        In the ultra-competitive scientific world, many scientists
        relish the opportunity to nit-pick the methods and
        assumptions of their contemporaries.  During my own three
        summers in research settings, weekly lab meeting banter was
        always filled with hoots and hollers poking fun at an
        article received for peer review.  This nit-picking, though
        on the surface driven by competition, serves a valuable
        purpose in evaluating the methods and conclusions of
        potential publications. 
      
      
        Some, however, criticize the peer review system for its
        lack of objectivity, pointing out that scientists
        evaluating the work of other scientists - work they may be
        competing against directly - lack perspective.  Others
        attack the peer review system's lack of compensation for
        reviewers, arguing that unless busy scientists are paid
        for their thorough efforts, they have little incentive to
        consider a paper carefully. 
        Despite these flaws, however, the peer review system
        certainly has helped to maintain high standards in
        scientific journal publications. 
      
      
        Unfortunately, there are no "peers" to review image
        submissions; only a trained eye knows which signs of
        image doctoring to scan for.  Major scientific
        journals - Cell, Nature, Science -
        utilize images to support and clarify the presentation of
        data.  Here lies the objectivity crisis:
        images - pictures, ultraviolet photographs of DNA gels,
        films of protein membranes - are undergoing subjective
        editing, parallel to the ways in which
        words have been manipulated to influence interpretation of
        data. 
      
      
        Dr. Michael Rossner of Rockefeller University, executive
        editor of The Journal of Cell Biology, first
        realized that submitted images had been doctored when the
        journal began to require digital submissions of images for
        publication.  Unlike hard copies of images, digital images
        can be magnified with the click of a button, and any area's
        color composition analyzed to yield the statistical
        likelihood that the combination of color pixels appeared
        naturally.  Since this policy change in 2002, over a
        quarter of all submitted digital images have failed to
        comply with the journal's image submission
        guidelines. 
        Rossner attributes the recent increase in image doctoring
        to the widespread availability of digital media devices,
        which has removed the technological barriers that
        previously kept images safe from human alteration or
        intervention.
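
        To make the kind of analysis Rossner describes concrete, the
        sketch below checks whether a cropped region of a digital
        image is implausibly uniform.  It is an illustration only,
        not the Journal of Cell Biology's actual screening procedure;
        the Pillow/NumPy usage, the file name, the crop box, and the
        95-percent threshold are all assumptions.

        import numpy as np
        from PIL import Image

        def region_uniformity(image_path, box, threshold=0.95):
            """Flag a region whose pixels are suspiciously uniform.

            box is a (left, upper, right, lower) crop rectangle.
            Returns True if more than `threshold` of the pixels share
            the single most common color - unlikely in an unedited
            photograph of a gel or micrograph, where sensor noise
            varies nearly every pixel.
            """
            region = np.asarray(Image.open(image_path).convert("RGB").crop(box))
            pixels = region.reshape(-1, 3)
            _, counts = np.unique(pixels, axis=0, return_counts=True)
            return counts.max() / len(pixels) > threshold

        # Hypothetical usage:
        # if region_uniformity("gel_figure.png", (40, 10, 120, 60)):
        #     print("Region is implausibly uniform - worth a closer look.")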
      
      
        The prestigious journal Nature released a widely
        read article bearing a catchy pop-culture title, "CSI:
        Cell Biology," attempting to explain the recent uproar
        surrounding scientific image fabrication and its detection.
         The piece explains that "[m]ost alterations are harmless:
        researchers legitimately crop a picture or enhance a faint,
        fluorescently tagged protein," but that some alterations
        "erase valuable data or raise suspicions of fabrication." 
      
      
        As the article points out, scientists doctor images for a
        wide variety of reasons.  Richard Sever, executive editor
        of the Journal of Cell Science in Cambridge, England,
        explains that the majority of doctoring offenses are
        "junior people tidying up the image and not realizing that
        what they're doing is wrong."  A few authors, however, have
        been prosecuted for combining images of cells from several
        cultures and assembling them so that all the cells appeared
        to be growing in one plate.  Sever acknowledges several
        difficult questions relating to image doctoring in
        scientific publication.  First, many argue that research
        ethics and morals are an under-represented part of science
        education; perhaps scientists are simply unaware of the
        potential consequences of their actions.  Second, in a
        field where doctoring offenses are committed by both
        innocent and malicious parties, how is it possible to
        differentiate between the two?  And since some forms of
        image doctoring are considered tolerable and even necessary
        while others constitute scientific fraud, the last and most
        controversial question asks where the line of acceptability
        should be drawn.
      
      
        The offenses committed by scientists submitting research
        data fill a complete spectrum, from innocent and
        permissible changes to malicious attempts to fabricate
        data.  For example, most scientific journals permit simple
        photo editing, such as changes to an image's brightness or
        straightforward cropping.  Some authors, however, would
        clean up the background of a band on a DNA gel (a test
        that separates DNA fragments by length) with Photoshop's
        clone or rubber-stamp tool, for purely cosmetic purposes. 
        Some would enhance the presence of a band through contrast
        manipulation, which, though innocent in appearance, can
        erase valuable data.  Others use these same tools
        to create entirely new bands.
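
        To see how an "innocent" contrast enhancement can erase
        data, consider the toy calculation below.  The lane
        intensities are invented for illustration, not taken from
        any real gel.

        import numpy as np

        # A synthetic one-dimensional "lane": a strong band (200), a
        # faint band (30), and background noise around 20.
        lane = np.array([20, 22, 200, 21, 30, 19, 23], dtype=float)

        # An aggressive contrast stretch that maps [50, 255] onto
        # [0, 255] and clips everything below the new black point.
        low, high = 50, 255
        stretched = np.clip((lane - low) * 255.0 / (high - low), 0, 255)

        print(stretched)
        # The strong band survives, but the faint band is clipped to
        # zero: the "enhancement" has quietly erased a real signal.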
      
      
        Regardless of intentions, image doctoring is a malignant
        tumor in the scientific community; the honor code has
        failed under the pressures of "publish or perish," the
        post-doc's devil.  The United States Office of Research
        Integrity tries to uphold standards of honesty and
        reliability in biomedical research.  In 1990, only two and
        one-half percent of all the office's allegations involved
        the doctoring of images in scientific papers; by 2001, the
        percentage had leaped to twenty-six percent.  A new safeguard is
        needed. 
      
      
        The ever-increasing concerns over scientific image
        integrity have catalyzed a new field: scientific image
        forensics.  Experts use high-resolution enlargements and
        mathematical algorithms to comb images for
        signs of doctoring.  The most common signs are areas of
        similar or identical color tones that, given their size,
        have a low statistical probability of occurring naturally. 
        Often when scientists attempt to remove background noise on
        a DNA gel, or eliminate fluorescent protein tags, they do
        so by borrowing a piece of nearby background, which
        produces these large areas with identical color toning. 
        Other signs are edges and boundaries that are either
        intentionally blurred or unrealistically
        sharp.  Dr. Farid received a grant from the Federal
        Bureau of Investigation to assist his research on verifying
        the authenticity of digital images.  Though many have
        attempted to develop screening processes, Dr. Farid
        approaches the quandary as simply a data-sorting and
        analysis problem, explaining that "[a]t the end of the day
        you need math" to determine beyond a reasonable doubt that
        the image has been doctored.  
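
        A crude version of the duplicate-region test described above
        could look like the block-matching sketch below.  It is not
        Dr. Farid's method or any journal's screening tool; the
        16-pixel block size, the grayscale conversion, and the file
        name are assumptions, and a real detector would also have to
        tolerate near-matches rather than only exact ones.

        import numpy as np
        from PIL import Image

        def find_duplicate_blocks(image_path, block=16):
            """Naive copy-move check: report byte-for-byte identical,
            non-overlapping grayscale blocks.  Cloned background (for
            example, a rubber-stamped gel background) tends to leave
            such exact repeats, which sensor noise makes vanishingly
            unlikely in an untouched image."""
            img = np.asarray(Image.open(image_path).convert("L"))
            seen, duplicates = {}, []
            height, width = img.shape
            for y in range(0, height - block + 1, block):
                for x in range(0, width - block + 1, block):
                    key = img[y:y + block, x:x + block].tobytes()
                    if key in seen:
                        duplicates.append((seen[key], (y, x)))
                    else:
                        seen[key] = (y, x)
            return duplicates  # pairs of matching block corners (y, x)

        # Hypothetical usage:
        # for first, second in find_duplicate_blocks("western_blot.png"):
        #     print("Identical 16x16 blocks at", first, "and", second)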
      
      
        Ultimately, the responsibility for verifying the
        authenticity of scientific images falls to the journals. 
        Because many scientists have failed to remain objective
        when submitting images to support their research data, it
        appears that closer scrutiny is necessary. No journal,
        however, is eager to ban image manipulation outright. 
        "CSI:  Cell Biology" explains that in many experiments,
        "researchers often have to adjust the relative intensities
        of red, green and blue fluorescent markers in order to show
        all three in a single image," which is considered an
        acceptable form of image manipulation.  Dr. Rossner has found the
        most widely accepted compromise to date, publishing
        explicit guidelines for the Journal of Cell Biology,
        which, in essence, require that any image doctoring
        not be part-specific.  In other words, lightening or
        darkening an image is acceptable so long as the alteration
        affects the entire image, thereby maintaining the
        original comparative ratios between areas of the image. 
        Katrina Kelner, a deputy editor of Science,
        commented on Rossner's guidelines that "[s]omething like
        this is probably inevitable for most journals."  The Journal of
        Cell Science anticipates releasing image manipulation
        guidelines within the next three months, and Nature Cell
        Biology now requires submission of the original digital
        file alongside any image submitted.  The only other
        doctoring guideline widely supported by editors (but not
        yet in place at any major international journal) is for
        authors to include a list of the adjustments made to any
        submitted image.
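
        The logic behind Rossner's whole-image rule can be seen in a
        toy calculation; the band intensities below are invented for
        illustration.

        # Two bands in the same figure, with hypothetical intensities.
        band_a, band_b = 120.0, 60.0
        print(band_a / band_b)                  # 2.0 - the comparison readers care about

        # Whole-image adjustment: every pixel brightened by the same
        # factor preserves the ratio.
        print((band_a * 1.3) / (band_b * 1.3))  # still 2.0

        # Part-specific adjustment: brightening only one band changes
        # the comparison itself.
        print((band_a * 1.3) / band_b)          # 2.6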
      
      
        Journal editors impose image alteration guidelines
        reluctantly.  As a whole, editors lament their necessity,
        but feel strongly that the alternative - unregulated
        publication of images alongside articles - could prove
        destructive to the scientific community. 
      
      
        In addition to the cost of the fraudulent research itself,
        image doctoring as a method of falsifying data costs the
        scientific research industry billions of dollars.  Research
        within the scientific community is built cumulatively, with
        scientists assuming that journal-published results from one
        research group are repeatable and therefore a suitable
        platform from which to begin their own research.  These
        assumptions save the science community valuable time and
        money, enabling researchers to move forward in designing
        investigations, instead of repeating already-proven
        results.  A false platform or foundation, however, might
        waste years of a scientist's career and valuable research
        dollars, to the detriment of the scientific community and
        the public it serves.
      
      
        It is imperative that the scientific community as a whole
        unite to design a system for maintaining the integrity of
        its publications.  Whether oversight of images falls to the
        journals or to another institution, it is mandatory that
        scientists reclaim and protect the objectivity and
        integrity of the information-sharing system.