Gary E. Schwartz, Ph.D.
"If it is real, it will be revealed; If it's fake, we'll find the mistake."
Motto, Human Energy Systems Laboratory Opening Quotation, The Afterlife Experiments
"I do not have control over my beliefs."
Ray Hyman, Ph.D.
Is it appropriate in science to ignore or dismiss important facts, consciously or unconsciously, in order to maintain one's preferred interpretations or beliefs?
Is it appropriate to ignore important historical, procedural, and empirical facts when drawing scientific conclusions about a body of research?
Consider the following scientific statement:
"When the total set of findings are considered, the simplest and most parsimonious explanation that presently accounts for the largest amount of the data - including the extraordinary observations or "dazzle" shots - is the survival of consciousness hypothesis."
Is this a subjective interpretation or "belief" of a scientist? Or, is it a reasoned conclusion based upon a systematic and complete review - not a selective review - of the data?
Most rational scientists agree that the credibility and integrity of a review of a body of research depend on the review including all the important information, not just the reviewer's favored information. Hyman's review, titled "How Not To Test Mediums," is a textbook example of selectively ignoring or dismissing historical, procedural, and empirical facts to fit one's preferred interpretation. The result is an inaccurate, mistaken, and biased set of conclusions about the current data.
Some might argue that Hyman has made what could be termed the ultimate reviewer's mistake: the selective ignoring and omitting of important information. Paraphrasing Hyman, "Probably no other extended review of psychic research deviates so much from accepted norms of scientific methodology as this one."
IMPORTANT NOTE FOR READERS: I will be speaking fairly forcefully below in my commentary concerning Hyman's reviewing tactics. I am taking a strong position here not because of the ultimate validity of the survival hypothesis - i.e. whether it is true or not - but because of the nature of the scientific reviewing process itself.
What we term the "ultimate reviewer's mistake" cannot be condoned. It must be exposed and understood, regardless of the specific research area that is being reviewed or the specific person doing the reviewing. The specific research topic is less important than the process of reviewing the topic. My argument is not with Hyman as a person, but about the process by which he has written his review.
ACKNOWLEDGEMENTS: I thank a number of my colleagues who have graciously taken the time to provide me with useful feedback about this commentary. They include Peter Hayes, PhD, Katherine Creath, PhD, PhD, Stephen Grenard, RRT, Donald Watson, MD, Emily Kelly, PhD, Lonnie Nelson, MA, and Montague Keen. The opinions provided below are those of the author - they are not necessarily those of my colleagues.
Introduction and Overview
Ray Hyman is a distinguished professor emeritus from the Department of Psychology at the University of Oregon, who has had a longstanding career as a skeptic focused on uncovering potential flaws in parapsychology research. Hyman is highly skilled at carefully working through the conventional checklist of potential sources of experimental error and limitations in research designs.
Hyman's overall appraisal of the research conducted to date is implied by his conclusion: "Probably no other extended program in psychical research deviates so much from accepted norms of scientific methodology as does this one."
Is Hyman's summary conclusion based upon a thorough review of the total body of research? Or, does it reflect the systematic ignoring of important historical, procedural, and empirical facts - a cognitive bias used by the reviewer in order to maintain his belief that the phenomenon in question is impossible?
As I document in substantial detail below, Hyman resorts to consciously and/or unconsciously selectively ignoring important information that is inconsistent with his personal beliefs.
Selective ignoring of facts is not acceptable in science. It reflects a bias that obviates the purpose of research and disallows new discoveries. I have made the statement that the survival of consciousness hypothesis does account for the totality of the research data to date. Of course, this does not make the survival hypothesis the only or correct hypothesis - my statement reflects the status of the evidence to date, not necessarily the truth about the underlying process. This is why more research is needed.
Note that I do not use the word "believe" in relationship to the statement. This is not a belief. It is an empirical observation derived from experiments.
It is correct that some of the single-blind and double-blind studies have weaknesses - we discuss the experimental limitations at some length in our published papers and The Afterlife Experiments book. However, these weaknesses do not justify dismissing the totality of the data as mistaken or meaningless. Quite the contrary, an honest and accurate analysis reveals that the data deserve serious consideration.
Our research presents all the findings - the hits and the misses, the creative aspects of the designs and their limitations - so that the reader can make an accurate and informed decision. What we strive for is seeking the truth as reflected in Harvard's motto "Veritas."
I appreciate Hyman's effort to outline some of the possible errors and limitations in the mediumship experiments discussed in my book The Afterlife Experiments. However, as Hyman emphasizes in his review, I do "strongly disagree" with his interpretations. The two fundamental disagreements I have with Hyman's arguments are:
1. Hyman has chosen to ignore numerous historical, procedural, and empirical facts that are inconsistent with his interpretive descriptions of our experiments, and
2. Hyman has chosen not to apply Occam's heuristic principle to the totality of the findings as a means of integrating the total set of data collected to date.
Before presenting a detailed commentary of Hyman's review, I provide a few samples where Hyman ignores key information that enables him to draw mistaken and biased conclusions - to prepare the reader for what lies ahead.
Selective Ignoring of Historical, Procedural, and Empirical Facts by Hyman: A Sampling
1. In his review, Hyman failed to mention the important historical fact that our mediumship research actually began with double-blind experimental designs. For example, the published experiment referred to in The Afterlife Experiments book as "From here to there and back again" with Susy Smith and Laurie Campbell was completed almost a year before we conducted the more naturalistic multi-medium / multi-sitter experiments involving John Edward, Suzanne Northrop, George Anderson, Anne Gehman, and Laurie Campbell. The early Smith-Campbell double-blind studies did not suffer from possible subtle visual or auditory sensory leakage or rater bias - and strong positive findings were obtained. Our decision to subsequently conduct more naturalistic designs (which inherently are less controlled) was made partly for practical reasons (e.g. developing professional trust with highly visible mediums) and partly for scientific ones (e.g. we wished to examine under laboratory conditions how mediumship is often conducted in the field). Hyman comes to a factually erroneous conclusion when he reports that double-blind experiments were initiated only late in our research program.
2. In an exploratory double-blind long-distance mediumship experiment where George Dalzell (GD) was one of six sitters and Laurie Campbell (LC) was the medium, Hyman states "because nothing significant was found, the results do not warrant claiming a successful replication of previous findings." However, Hyman minimizes the fact that the number of subjects in this exploratory experiment was small (n=6). More importantly, Hyman fails to cite an important conclusion that we reached in the discussion: "If the binary 66% figure approximates (1) LC's actual ability to conduct double-blind readings, coupled with (2) the six sitters' ability, on the average, to score transcripts double-blind, the 66% figure would require only an n of 25 sitters to reach statistical significance (e.g. p < .01)." Hyman fails to mention that NIH, for example, requires that investigators who apply for research grants calculate statistical power and sample size to determine what n is required to obtain a statistically significant result. Hyman would rather dismiss the fact that the highly accurate ratings obtained in the single-blind published study for GD were indeed replicated in the double-blind published study than admit to the possibility that individual differences in sitter characteristics are an important and genuine factor in mediumship research.
3. It is curious that among the many examples of readings provided in The Afterlife Experiments book, one early subset / pattern of facts happened to fit Hyman nicely. It is true that mention of the "Big H," a "father-like figure," and an "HN sound" would fit Hyman's father as it did the sitter's husband mentioned in the book. [Note - Hyman would have us dismiss his own words because I have not attempted to verify independently that Hyman's father's name was really Hyman.] Curiously, Hyman chose not to report the fact that many other pieces of specific information also reported for the "Big H" did not fit Hyman but did fit the sitter precisely. Moreover, Hyman consistently failed to report scores of examples from readings reported verbatim in the book that were highly unusual and unique to individual sitters (e.g. John Edward seeing a deceased grandmother with two large poodles, a black one and a white one, and the white one "tore up the house"). The reason is that these numerous examples contradict the conclusion Hyman chooses to accept - that the information, by chance, could fit multiple sitters - that is, if we treat the information selectively.
4. Hyman's conclusion that experienced cold readers can readily replicate the kinds of specific information obtained under the conditions of our experiments is mistaken at best and deceptive at worst. Under conditions where (a) professional cold readers do not know the identity of the sitters [i.e. cheating is ruled out], and (b) cold readers are not allowed to see or speak with the sitters [i.e. cueing and feedback are ruled out], it is (c) impossible for cold readers to use whatever pre-obtained sitter-specific information they might have obtained, and (d) impossible for cold readers to use their feedback tricks to help them get information from the sitters. At the two-day meeting I convened of seven highly experienced professional mentalist magicians and cold readers, they all agreed that they could not apply their conventional mentalist tricks under these strict experimental conditions. However, a vocal subset (Hyman was one of the three) made the claim that if they had a year or two to practice, they might be able to figure out a way to fake what the mediums were doing. My response to this vocal subset was simple. It was "show me." Just as I don't take the claims of the mediums on faith, I don't take the claims of the magicians on faith either. Mentalist magicians who make these claims will have to "sit in the research chair" and show us that they can do what they claim they can do. Thus far, the few cold readers who have made these claims have refused to be experimentally tested. They have not been willing to demonstrate in the laboratory, first, that they currently cannot do what the mediums do under these experimental conditions, and then, at a later date, that their performance can improve substantially with practice.
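The statistical claim in point 2 above - that a given hit rate requires a certain n to reach significance - is an instance of a standard sample-size calculation. The sketch below is a generic, exact-binomial version of such a calculation, not a reproduction of the original analysis: the 66% hit rate and the 50% chance baseline are assumptions carried over from the quoted discussion, and the resulting n is sensitive to choices (one- vs. two-tailed testing, how the expected hit count is rounded) that the original power analysis may have made differently.

```python
from math import comb, ceil

def binomial_tail(k, n, p=0.5):
    # One-tailed probability of observing k or more successes
    # in n independent trials with success probability p.
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

def required_n(hit_rate, chance=0.5, alpha=0.01, n_max=500):
    # Smallest n at which the expected number of hits (rounded up)
    # would be significant against chance under a one-tailed exact
    # binomial test; returns None if no n up to n_max suffices.
    for n in range(2, n_max + 1):
        if binomial_tail(ceil(hit_rate * n), n, chance) < alpha:
            return n
    return None

# Example: a 66% hit rate scored against a 50% chance baseline.
# 17 hits in 25 readings gives a one-tailed tail probability of about 0.054.
print(round(binomial_tail(17, 25), 3))
print(required_n(0.66))
```

Depending on these modeling choices, the required n can vary considerably; the broader point - that a power calculation of this kind determines how many sitters are needed before a null result from a small sample (n=6) is informative - holds regardless.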
Below is a verbatim draft of Hyman's review. I have added detailed commentary (in blue text). My comments begin with the word "VERITAS" - to remind us of the facts. My goal is to place Hyman's comments in the larger context of the total history and findings obtained to date in our experiments. At the end of this commentary, I provide a summary that addresses the topic of scientific integrity.
Critiquing the Afterlife Experiments
[Review published in the Skeptical Inquirer [Jan-Feb, 2003] of The Afterlife Experiments: Breakthrough Evidence of Life After Death. By Gary E. Schwartz, Ph.D. with William L. Simon. Pocket Books, New York, 2002. ISBN 0-7434-3658-X. 374 pp. Hardcover, $25.]
Professor Gary Schwartz makes revolutionary claims that he has provided competent scientific evidence for survival of consciousness - even more extraordinary - that mediums can actually communicate with the dead. He is badly mistaken. The research he presents is flawed, and in numerous ways. Probably no other extended program in psychical research deviates so much from accepted norms of scientific methodology as this one.
VERITAS: Using Hyman's own words, his Summary is "badly mistaken." The review Hyman presents is "flawed," and "in numerous ways." Hyman's review is a selective and false presentation of the facts. For example, what Schwartz concludes is that when the totality of the data are considered, the simplest and most parsimonious explanation that accounts for the largest amount of the data - including the "dazzle shots" - is currently the survival of consciousness hypothesis. We argue that the available scientific evidence justifies continued research into this profoundly important question. Only a selective and mistaken review of The Afterlife Experiments book allows Hyman to reach his over-generalized conclusions. Again, adopting Hyman's extreme words, "Probably no other extended review of psychic research deviates so much from accepted norms of scientific methodology as this one."
Gary Schwartz is professor of psychology, medicine, neurology, psychiatry, and surgery at the University of Arizona. After receiving his Ph.D. in personality psychology from Harvard University, he taught at Harvard and then at Yale University for 28 years as a professor of psychology and psychiatry.
VERITAS: I taught at Yale for 12 years, not 28 years. Even if one adds the years I was a graduate student and assistant professor of psychology at Harvard, the sum is 21 years, not 28 years. Sadly Hyman begins his review by misstating facts and making false statements - a characteristic that pervades his review.
He has published more than 400 scientific papers. He came to the University of Arizona in 1988 to do research on, among other things, the relationship between love and health. In 1993 he met Linda Russek and married her soon after.
VERITAS: I married Dr. Russek in 1997, four years after we met [Note - is four years "soon after" as Hyman states?]. For the record, I separated from Dr. Russek in 2001.
Linda was still grieving over the death of her father. Soon after she met Schwartz, Linda asked him, "Do you think it is possible that my father is still alive?"
That question triggered a research program to answer it and the more general question of survival of consciousness. At first the program was conducted in secret and then became public around 1997. Since 1997, Schwartz has reported a number of studies in which he and his coworkers have observed some talented mediums such as John Edward and George Anderson give readings to sitters in his laboratory. This work has attracted considerable attention because of Schwartz's credentials and position.
VERITAS: The work has attracted considerable attention also because of the nature of the findings and their potential implications. Also, the work began with Laurie Campbell in 1998, when our first double-blind studies were conducted. Our more naturalistic single-blind research with John Edward and others began in 1999.
Even more eye-opening is Schwartz's apparent endorsement of the mediums' claims that they are actually communicating with the dead.
VERITAS: What I have concluded is as follows, "When the complete set of findings are considered, in total, the simplest and most parsimonious explanation that accounts for the largest amount of the data - including the "dazzle shots" - is currently the survival of consciousness hypothesis." This is a factual statement. No other single hypothesis accounts for as much of the data as does the survival hypothesis.
For Schwartz this conclusion follows from the famous principle known as Occam's razor. Schwartz paraphrases Occam's principle as "All things being equal, the simpler hypothesis is usually the correct one."2 As Schwartz sees it, "The best experiments [supporting the reality of communicating with the dead] can be explained away, only if one makes a whole series of assumptions..." These assumptions include: 1) that mediums use detectives to gather some of their information; 2) that sitters falsely remember specific facts such as the names of relatives; 3) that the mediums are super guessers; 4) that mediums can interpret subtle cues such as changes in breathing to infer specific details about the sitter and her relatives; and 5) that the mediums use super telepathy to gather facts about the sitter's deceased friends and family. According to Schwartz, such assumptions create unnecessary complexity.
VERITAS: If the combined set of hypotheses listed above actually accounted for the totality of the findings, the complexity would be warranted - and we would say so. However, the fact is that these explanations, in total, still leave many findings unaccounted for - whereas the survival hypothesis addresses these "anomalous observations" simply and elegantly - hence the reason for keeping an open mind about the survival hypothesis.
"However, if we were to apply Occam's razor to the total set of data collected over the past hundred years, including the information you have read about in this book, there is a straightforward hypothesis that is elegant in its simplicity. This is the simple hypothesis that consciousness continues after death. This hypothesis accounts for all the data [p. 254]."
VERITAS: This is a factual statement - meaning, the survival hypothesis does account for all the data. Of course, this does not make the survival hypothesis the correct hypothesis - my statement reflects the status of the evidence to date, not necessarily the truth about underlying process. This is why more research is needed.
This book presents evidence from a series of five reports in which Schwartz and his associates observed mediums give readings to sitters "in stringently monitored experiments."
VERITAS: Fact - there are more than five experiments described in the book. Also, we have had skeptics regularly review our research, pointing out potential limitations in given experiments.
Schwartz does admit that his experiments were not ideal. For example, only the very last in his sequence of studies used a truly double-blind format.
VERITAS: This is an egregious error of fact. Hyman's statement is simply false. The book clearly indicates that a number of the original studies with one of the mediums - Laurie Campbell - were conducted double-blind. Hyman (and others, such as Wiseman) typically ignores these early double-blind studies.
Yet he insists that the mediums, although often wrong, consistently came up with specific facts and names about the sitter's departed friends and relatives that the skeptics have been unable to explain away as fraud, cold reading, or lucky guesses. He provides several examples of such instances throughout the book.
VERITAS: Fact - we present numerous examples of these kinds of observations throughout the book. And yes, mediums are not perfect. They make mistakes. However, this does not mean that they are simply guessing or cold reading. One does not dismiss a difficult skill because the practitioners are not perfect.
These examples demonstrate, he believes, that the readings given by his mediums are clearly different from those given by cold readers and less gifted psychics. "If cold readings are easy to spot by anyone familiar with the techniques," he asserts, "the kinds of readings we have been getting in our laboratory are quite different in character."
VERITAS: As will be clear below, cold readers do not practice their trade under conditions where they do not know the identity of the subjects they are reading and they are not allowed to see them or interact with them verbally.
Could It Be Cold Reading?
Now it so happens that I have devoted more than half a century to the study of psychic and cold readings. I have been especially concerned with why such readings can seem so concrete and compelling, even to skeptics. As a way to earn extra income, I began reading palms when I was in my teens. At first, I was skeptical. I thought that people believed in palmistry and other divination procedures because they could easily fit very general statements to their particular situation. To establish credibility with my clients, I read books on palmistry and gave readings according to the accepted interpretations for the lines, shape of the fingers, mounds, and other indicators. I was astonished by the reactions of my clients. My clients consistently praised me for my accuracy even when I told them very specific things about problems with their health and other personal matters. I even would get phone calls from clients telling me that a prediction that I had made for them had come true. Within months of my entry into palm reading, I became a staunch believer in its validity. My conviction was so strong that I convinced my skeptical high school English teacher by giving him readings and arguing with him. I later also convinced the head of the Psychology Department where I was an undergraduate.
VERITAS: It is clear that Hyman cared about this topic, and his naïve belief was shattered (below - Hyman uses the word "shocked"). I tell a similar story about "Mr. Wizard" when I was a freshman at Harvard. I too witnessed self-deception and fraud first hand. It is one of the reasons I became a scientist.
When I was a sophomore, majoring in journalism, a well-known mentalist and trusted friend, persuaded me to try an experiment in which I would deliberately read a client's hand opposite to what the signs in her hand indicated. I was shocked to discover that this client insisted that this was the most accurate reading she had ever experienced. As a result, I carried out some more experiments with the same outcome. It dawned on me that something important was going on. Whatever it was, it had nothing to do with the lines in the hand. I changed my major from journalism to psychology so that I could learn how come not only other people, but also I, could be so badly led astray. My subsequent career has focused on the reasons why cold readings can appear to be so compelling and seemingly specific.
VERITAS: Hyman's sensitivity to why people can "be so badly led astray" has had a significant impact on his career and life. A primary reason why cold readings can appear to be so compelling and seemingly specific, and why people can be led astray, is that people selectively remember information that fits their wishes and beliefs while ignoring facts that do not fit. Professor Loren Chapman calls this "illusory correlation." This is the very cognitive bias that Hyman has used, seemingly unconsciously, in his selective and mistaken review of The Afterlife Experiments book.
Psychologists have uncovered a number of factors that can make an ambiguous reading seem highly specific, unique, and uncannily accurate. And once the observer or client has been struck with the apparent accuracy of the reading, it becomes virtually impossible to dislodge the belief in the uniqueness and specificity of the reading. Research from many areas demonstrates this finding. The principles go under such names as the fallacy of personal validation, subjective validation, confirmation bias, belief perseverance, the illusion of invulnerability, compliance, demand characteristics, false uniqueness effect, foot-in-the-door phenomenon, illusory correlation, integrative agreements, self-reference effect, the principle of individuation, and many, many others.
VERITAS: I studied with Dr. Loren Chapman, who conducted the original illusory correlation research, and Dr. Robert Rosenthal, who conducted the original experimenter effect research. I have personally conducted research on self-deception. Hyman knows I am not naïve when it comes to these matters. This is why I am so sensitive to examining the totality of evidence, the "big picture."
Much of this is facilitated by the illusion of specificity that surrounds language. All language is inherently ambiguous and depends much more than we realize upon the context and non-linguistic cues to fix its meaning in a given situation.
VERITAS: Hyman's quote "I do not have control over my beliefs" that introduces this commentary is related to his awareness of his own state of mind regarding these issues. When Hyman states "All language is inherently ambiguous…" he reveals an important philosophical belief - not an empirical fact. If I say "My name is Gary Schwartz" or Hyman says "My name is Ray Hyman" - the meaning is not "inherently ambiguous." Hyman's overgeneralized statement about language reflects his own use of language (illustrated at numerous points below).
Again and again, Schwartz argues that the readings given by his star mediums differ greatly from cold readings. He provides samples of readings throughout the book.
VERITAS: This is a factual error made by Hyman. I conclude that cold reading cannot explain the totality of the findings because when the opportunity to communicate with the sitters is removed - in multiple studies where the mediums never see or speak with the sitters - highly accurate information is still obtained. "Again and again" - to use Hyman's extreme words - Hyman misses this critical point.
Although these samples were obviously selected because, in his opinion, they represent mediumship at its best, every one of them strikes me as no different in kind from those of any run-of-the-mill psychic reader and as completely consistent with cold readings.
VERITAS: "Every one of them" is a very strong statement by Hyman. Sadly, Hyman's memory is selective and frequently inaccurate. For example, he forgets that the very first reading that Suzanne Northrop conducted involved her asking only five questions of the sitter. Nonetheless, she generated more than 130 pieces of information - including initials, names, causes of death, historical facts, descriptions of personal appearance, etc. - that were later scored by the sitter as over 80% accurate. This is not how cold readers perform their craft - and Hyman knows this.
In August, 2001, Schwartz assembled a panel of seven experts on cold reading, including me, to instruct him on the topic. We were shown videotapes of Suzanne Northrop and John Edward giving readings in his laboratory. Most members of the panel were openly sympathetic to Schwartz's goals and program. Yet we all agreed that what we saw Northrop and Edward doing was no different from what we would expect from any cold reader.
VERITAS: Again, Hyman's memory is sadly selective and inaccurate. As mentioned above, at the two-day meeting I convened of seven highly experienced professional mentalist magicians and cold readers, they all agreed that they could not apply their conventional mentalist tricks under these strict experimental conditions [no knowledge of the sitter's identity, and no verbal or non-verbal visual or auditory cues / feedback]. However, a vocal subset (Hyman was one of the three) made the claim that if they had a year or two to practice, they might be able to figure out a way to fake what the mediums were doing under these experimental conditions. My response to this vocal subset was simple. It was "show me." Just as I don't take the claims of the mediums on faith, I don't take the claims of the magicians on faith either. Mentalist magicians who make these claims will have to "sit in the research chair" and show us that they can do what they claim they can do. Thus far, the few cold readers who have made these extreme claims have refused to be experimentally tested. They have not been willing to demonstrate in the laboratory, first, that they currently cannot do what the mediums do under these experimental conditions, and then, at a later date, that their performance can improve substantially with practice.
I am sure that Professor Schwartz will strongly disagree with my observation that the readings he presents as strong evidence for his case very much resemble the sorts of readings we would expect from psychic readers in general and cold readers in particular. This disagreement between us, however, relies on subjective assessment. That is why we have widely accepted scientific methods to settle the issue.
VERITAS: As stated above, it is Hyman - not me - who is being subjective and selectively ignoring the scientific procedures and findings. Hyman's claim that "cold reading" procedures can "explain" the findings does not and cannot apply to the many "sitter-silent" experiments we have conducted. Some of these "sitter-silent" experiments were single-blind, others were double-blind [conducted early on as well as currently]. Hyman not only does cold reading a disservice, but he presents an overgeneralized and false picture of cold reading when he fails to appreciate this fact.
That is why it is important, especially for the sort of revolutionary claims that Schwartz wants to make, that it be backed up by competent scientific evidence. Throughout this book, Schwartz implies that he has already provided such evidence. This, as I will explain, is badly mistaken. The research he presents is flawed. Probably no other extended program in psychical research deviates so much from accepted norms of scientific methodology as does this one.
VERITAS: Hyman is correct that some of the single-blind and double-blind studies have weaknesses. We discuss them in our papers and book. However, these weaknesses do not justify dismissing the totality of the data as mistaken or meaningless. Quite the contrary, an honest and accurate analysis reveals that the data deserve serious consideration. Unlike Hyman's review, which is "flawed" because it is selective in what it presents and ignores, our research presents all the findings - the hits and the misses, the creative aspects of the designs and their limitations - so that the reader can make an accurate and informed decision.
NOTE: Time and again, I return to the issue of "ignoring" facts and failing to consider the totality of the data. The reason is that Hyman consistently makes this ultimate mistake, and it is essential that I point out the numerous illustrations of this highly replicated mistake.
Is the Research Fundamentally Flawed?
Although never going so far as to claim his research methodology is ideal, he apparently believes it is adequate to justify his conclusions that his mediums are communicating with the dead. He writes, "skeptics who claim that this is some kind of fraud the mediums are working on us have nonetheless been unable to point out any error in our experimental technique to account for the results (p xxii)."
VERITAS: What this particular sentence refers to is that no errors in our experimental technique justify the explanation that all the data can be explained as fraud. Hyman himself reluctantly admits that fraud is a highly unlikely explanation for the totality of the findings obtained in this research. Hyman is overgeneralizing the context of this sentence when he implies that this sentence, by itself, "justifies" my conclusion that our mediums are "communicating with the dead." What this sentence justifies is our conclusion - which is also Hyman's - that fraud is not a plausible explanation for what our mediums are doing.
Later he asserts, "the data appear to be real. If there is a fundamental flaw in the totality of the research presented in these pages, the flaw has managed to escape the many experienced scientists who have carefully examined the work to date." These statements perplex me greatly.
VERITAS: Hyman says these statements "perplex" him. Let me explain. The key phrases in the sentence are as follows: "a fundamental flaw" that "can account for the totality of the research." The term "a fundamental flaw" refers to a single flaw. Hyman's "flaws" listed below each apply to only a small subset of the findings; they do not, individually, account for the totality of the findings. We point out most of these potential flaws ourselves, and with each experiment, we address them (and eliminate them) over time. No one has come up with an additional single flaw, not on Hyman's list, that can make the findings go away - either all the findings, or even a subset of the findings that would bring the evidence down to zero. Hence, no single flaw has been discovered that can account for the totality of the findings - not fraud, not cold reading, not rater bias, not chance, etc.
I have carefully itemized not one, but several "fundamental" flaws in Schwartz's afterlife experiments. I confronted Schwartz with this listing of flaws at two public meetings where we shared the same platform. I also brought them up again at the panel on cold reading that he convened. The other members of the panel also pointed to flaws. And Wiseman and O'Keeffe3 pointed to serious problems with Schwartz's first two published studies in the areas of judging bias, control group biases, and sensory leakage.
VERITAS: These are the same "flaws" that we considered from the very beginning of our research. Hyman has never given us any potential flaws that we were not previously aware of. I have told Hyman that we were well aware of these possibilities from the beginning (which is why we began our research program with double-blind experiments). Hyman ignores (or selectively forgets) this fact. Remember, Hyman ignores the early double-blind studies that addressed these potential explanations and ruled them out in the experimental designs. Positive findings were obtained in these early studies that eliminated judging bias, control group bias, and sensory leakage as potential problems. Since we already knew that positive findings could be obtained in double-blind studies, it was reasonable to consider conducting some more naturalistic studies with new mediums and then tightening the controls if and when positive findings were revealed with the new research mediums (e.g. John Edward). The most recent studies again rule these explanations out. Positive findings are still obtained (see below).
I would have to make this review almost as long as Schwartz' book to explain adequately each flaw.
VERITAS: We would have to take Hyman's word on this extremist claim.
Because any one of these flaws by itself would suffice to invalidate his experiments as acceptable evidence, I will discuss only a few of these in this review.
VERITAS: The statement "Because any one of these flaws by itself would suffice to invalidate his experiments as acceptable evidence…" is grossly in error. The statement is exemplary of the ultimate reviewer's mistake. The truth is, different experiments ruled out different flaws. No single experiment ruled out all the flaws; but no single flaw ruled out all the data from a given experiment. Most importantly, the fact is, no single flaw "by itself" rules out the totality of the findings. Hyman loses all credibility as a purportedly unbiased reviewer when he makes overgeneralizations such as "every one of them" or "these flaws by itself would suffice to invalidate his experiments." I hypothesize that Hyman may not be conscious of the fact that he makes these kinds of egregious overgeneralizations and extreme statements that are not consistent with the facts.
First, I will list here the major types of flaws in the experiments described in his first four reports (I will deal with the fifth report separately below):
VERITAS: This ignores the double-blind studies conducted prior to the more naturalistic series.
1. Inappropriate Control Comparisons
2. Inadequate Precautions Against Fraud and Sensory Leakage
3. Reliance on Non-Standardized, Untested Dependent Variables
4. Failure to Use Double-Blind Procedures
5. Inadequate "Blinding" Even in What He Calls "Single Blind" Experiments
6. Failure to Independently Check on Facts the Sitters Endorsed as True
7. Use of Plausibility Arguments To Substitute for Actual Controls
VERITAS: We will consider the severity of these potential limitations below.
The preceding list refers to defects in the conduct of the experiments and in the gathering of the data. Other very serious problems attach to the way Schwartz interprets and presents the results of his research. These include:
8. The Confusion of Exploratory with Confirmatory Findings
9. The Calculation of Conditional Probabilities That Are Inappropriate and Grossly Misleading
10. Creating Non-falsifiable Outcomes by Reinterpreting Failures as Successes
11. Inflating Significance Levels by Failing to Adjust for Multiple Testing and by Treating Unplanned Comparisons as if They Were Planned
VERITAS: We will consider the true severity of these eleven potential limitations below.
Other problems involve failure to use adequate randomization procedures, using only sitters who are predisposed to the survival hypothesis, inappropriate statistical tests, and other common defects that plague new research programs.
VERITAS: Note Hyman's sweeping generalizations in the above statements. Metaphorically, Hyman is waving his magic "overgeneralization" wand, and he is expecting that the totality of the data will simply disappear. But this is not a magic act. Hyman is not doing palm reading. This is science. Hyman's use of false overgeneralizations needs to be exposed and stopped.
Even if the research program were not compromised by these defects, the claims being made would require replication by independent investigators. Perhaps Schwartz' most serious misconception is his attempt to shift the burden of proof from himself to the skeptics.
VERITAS: I agree with Hyman about the need for independent replication. Part of the reason for publishing exploratory and confirmatory studies - and The Afterlife Experiments book - is to encourage other scientists to be brave enough to conduct studies themselves. However, many scientists will refrain from conducting independent replications if they fear that they will be subjected to selective and biased reviewing of the data they publish.
The worst mistake made by Schwartz and his colleagues was to publish the results they have obtained so far. Instead, they should have first tried to gather evidence for their hypothesis that would meet generally accepted scientific criteria. By submitting their very inadequate studies to public scrutiny and by demanding that skeptics "explain away" their defective data, they have lost credibility.
VERITAS: The truth is, the research does meet "generally accepted scientific criteria." No single study is perfect, but the data are important enough to warrant sharing the results as the program unfolds. The findings can always be dismissed by those who choose to dismiss them (as Hyman does summarily). At the 25th meeting of CSICOP, Hyman said he saw no reason to consider any of the data until a multi-center double-blind study was completed. That is his choice. However, if this strict criterion were required in medicine, for example, no double-blind drug trials would ever be conducted - because the exploratory and single-blind experiments would be dismissed in the first place. The fact is, it is published exploratory and single-blind studies that justify spending the time and money to design and conduct major double-blind experiments. The "worst mistake" we could make would be to keep these data secret.
In addition, the journals that did accept these studies for publication and Schwartz's panel of Friendly Devil's Advocates have also suffered greatly in credibility.
VERITAS: Hyman fails to mention that in the discussion sections of each of the published papers, the majority of his list of potential flaws - i.e. the subset appropriate to a given study - are mentioned as possible alternative explanations, as well as the plausible reasons for proposing that they are insufficient to account for the findings. Honest disagreement about interpretations of data is part of the nature of the scientific process. Hyman also ignores the fact that the papers were carefully peer reviewed by scientists who know the literature and the experimental challenges.
Schwartz' Inadequate and Inappropriate Response to Criticisms
Schwartz's responses to criticisms such as those made by Wiseman and O'Keeffe obscures rather than clarifies matters.4 For example, in regards to his failure to provide safeguards against sensory leakage, he complains that Wiseman and O'Keeffe "curiously did not mention that we were fully cognizant of such issues and were actively researching them at the time the Schwartz et al. paper was published." The fact that the researchers were aware that they had not provided adequate safeguards against sensory leakage does not in any way make their data more acceptable. Indeed, if they were aware of how to properly control for this flaw, it is even more inexcusable that they failed to do so. Why did they publish data they knew to be compromised and try to pass them off as legitimate science?
VERITAS: Again, Hyman makes a gross and egregious overgeneralization. In some of the experiments critiqued by Wiseman and O'Keeffe (e.g. the Canyon Ranch experiment that included a sitter-silent condition), it was possible that non-verbal auditory cues were operating (e.g. subtle sighs, breathing changes, movement in the chair) because the mediums and sitters were located in the same room. Note that in later studies, these subtle auditory cues were eliminated through long-distance readings conducted hundreds or thousands of miles apart, and positive findings were still obtained. Where we (the ultra-skeptics and I) differ is in interpreting the potential role that the subtle auditory cues could have played in the earlier studies. Is there any experimental evidence indicating that subtle breathing cues can provide specific and detailed information about initials, names, causes of death, historical facts, etc.? The fact is, no such evidence exists. More importantly, when the subtle cues are eliminated completely, positive findings are still obtained. Logic dictates the conclusion that sensory leakage is not a plausible explanation for the findings. Hyman, along with Wiseman and O'Keeffe, ignores this fact.
Indeed, Schwartz actually states that he deliberately allowed for some sensory leakage to see if "the remaining subtle cues" could explain the subsequent accuracy of the mediums' statements.
VERITAS: That's factually correct. When we began our semi-naturalistic studies, we wanted to see whether progressively removing cues in later experiments would eliminate the findings. The findings did not go away.
He also states that he wanted to begin with "a semi-naturalistic design...to develop a professional relationship with the mediums..." If, in fact, this was his rationale for using an inadequate design, then he should have treated the study as a preliminary probe to see if the mediums could work under laboratory conditions. Such a preliminary or pilot study, however, should then be followed up with a formal, properly conducted experiment. Knowing how to properly control for sensory leakage in no way licenses the publishing of flawed data to support a hypothesis.
VERITAS: Our long-distance studies do "properly control" for the sensory-leakage question - and Hyman knows this. Positive findings do not disappear. So what is Hyman's point? Why does he persist in his belief in sensory leakage? Why can Hyman not "control his beliefs" as a function of the research data that are obtained?
In defending himself against the charge of sensory leakage, Schwartz uses another tactic that violates acceptable scientific conduct. He tries to shift the burden of proof onto the skeptic.
"Skeptics who speculate that `cold reading' can achieve similar results have a responsibility to show that identical findings can be obtained under the conditions used in the Schwartz et al. research (e.g. the single-blind sitter-silent condition that effectively rules out pre-experimental information and verbal feedback). We welcome such experiments.)"
Sorry, Professor Schwartz. The skeptics and the scientific community have no responsibility to show anything until you provide them with data collected according to well-established and acceptable standards.
VERITAS: If an ultra-skeptic wishes to make the claim that cold readers can do as well as mediums under semi-naturalistic conditions, it behooves them to show us evidence that this is indeed true. Of course, semi-naturalistic conditions are not the ideal experiment. But it is quite unclear if cold readers can even do as well as mediums under these "looser" experimental conditions. In fact, as mentioned above, under conditions where cold readers are not allowed to know the identities of the sitters and are not allowed any visual or verbal communication with the sitters, cold readers tell us - repeatedly - that this eliminates most of their tricks. Anyone who reads books on cold reading can come to that conclusion for themselves. However, Hyman - a purported expert on cold reading - fails to appreciate this fact. The truth is, if a critic wishes to propose a possible alternative explanation, he or she should ideally understand what the possible alternative explanation means - and how it can be tested.
The responsibility is yours to first provide us with evidence for your hypothesis of survival of consciousness that is gathered according to the appropriate scientific standards which include controlling for sensory leakage; devising dependent variables that are relevant, reliable and valid; and using control comparisons that are meaningful.
VERITAS: The issue here is not the survival of consciousness hypothesis - the issue is Hyman's (and others) claim that cold readers can do what the mediums can do. That is the claim made by the ultra-skeptics. If cold readers can't do what the mediums do, then the cold reading argument should be dropped, plain and simple - regardless of whatever the ultimate correct explanation is for what the mediums are actually doing. Hyman seems not to understand or accept this fundamental fact.
Schwartz's rejoinders to Wiseman and O'Keeffe's other two topics of criticism are even more disturbing. His response to the charge of possible judging bias is that, "The purpose of the original Schwartz et al. experiments (2001) was not to rule out possible rater bias, but to minimize it."
VERITAS: One cannot "rule out" rater bias when one is having laymen rate information. Hyman ultimately speaks to this point toward the end of this review when he critiques one of our recent double-blind experiments. All one can do experimentally is minimize judging bias. Importantly, a significant subset of the information does not involve rater bias. If your son's name is Michael, and he shot himself, there is no rater bias when a rater scores statements like "the initial M," "the name Michael," or "he committed suicide" as accurate. If there is rater bias, the bias comes from Hyman and Wiseman and O'Keeffe themselves, who would rather overgeneralize and dismiss all the data as rater bias than discern which portions of the data are susceptible to rater bias, and which are not. Fortunately our sitters are not as biased as Hyman and Wiseman and O'Keeffe appear to be.
He again tries to shift the burden of proof to the skeptic, by arguing that it is implausible to speculate that his sitters would exhibit rater bias on such things as names, relationships, and the like. Indeed, it is highly plausible to me that some sitters might acquiesce to statements that are demonstrably false.
VERITAS: Yes, some sitters might acquiesce under pressure. But this is highly unlikely to apply to the sitters we carefully selected to participate in our studies. They were under the scoring microscope. The scoring sessions were often videotaped. They knew they would have to defend their ratings - to scientists and the media. These were not people picked at random and given vague statements from palm reading, for example. If they acquiesced at all, it was to provide conservative ratings. Again, Hyman mistakenly overgeneralizes.
However, science exists as a way to avoid arguments over plausibility. Minimizing rater bias is not the same as precluding it. If he wants to claim scientific acceptance for his evidence then he has to gather the data under conditions that eliminate or adequately correct for such bias.
VERITAS: Hyman rejects all the rating data this way - even the double-blind data (described below). However, as you will see, Hyman does not reject his own personal experience using these same criteria (see below).
Even worse is his rejoinder to the claim that he used an inappropriate control group. "The purpose of the original...experiments was not to include an ideal control group, but rather to address, and possibly rule out (or in) one possible explanation for the data-i.e. simple guessing."
VERITAS: Control groups are designed to address specific questions. Different control groups address different questions. This is normal scientific practice. Hyman knows this. We did not have the funds to run multiple control groups - we asked a simple question about simple guessing, and obtained a straightforward answer.
This last statement is both confusing and wrong. I suspect that Schwartz means by "an ideal control group" one made up of individuals who are the same age and have the same sort of experience as his mediums.
VERITAS: Hyman's suspicion of what I mean is in error. An ideal control group, to me, would not only match for age and sex, but also for conditions of the data collection (some of which he addresses below). For example, I would ideally put naïve individuals in the actual experiment, and see how well they could guess information. As mentioned above, this is costly - and funding for this kind of research is extremely limited.
Since his actual control group consisted of undergraduate students who had no prior experience as mediums, the group was obviously not ideal in this sense. However, what Wiseman and O'Keeffe are criticizing is that this control group in no way provides a proper comparison or baseline for the "accuracy ratings" of the mediums by the sitters. This is for the simple reason that the control group was given a task that differed in very important ways from that of the mediums.
VERITAS: Again, Wiseman and O'Keeffe and Hyman ignore many important facts that contradict their sweeping dismissals. For example, in the experiment, all five mediums reported a deceased son. None reported a deceased daughter. The mediums were 100% accurate for "son-dead" and 100% accurate for "daughter-living." The conditional probability is .5 * .5 * .5 * .5 * .5 for the deceased son and .5 * .5 * .5 * .5 * .5 for the living daughter. Our question was, "What is the probability of a lay individual guessing this pattern by chance?" If we give 70 undergraduates the questions "Is the son alive - yes or no?" and "Is the daughter alive - yes or no?" what is their guessing rate? It should be 50% and 50% (our data confirmed this obvious point).
The cold reader will retort: if you knew the odds, you would guess that males are more likely to die than females. True - cold readers should do better on this example. However, our question was not what the guessing rate of cold readers is (we did not have a large sample of them available), but what the guessing rate was for an intelligent group of individuals. This is a useful piece of information (i.e. it rules out simple guessing by non-cold readers as an explanation).
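The chance calculation described above can be sketched in a few lines. This is a minimal illustration only: the 50/50 per-item guessing rate is the assumption stated in the text, and the simulated control group of 70 random guessers is hypothetical, not the actual undergraduate data.

```python
import random

# Chance probability that all five mediums independently report
# "son deceased" and "daughter living," treating each binary call
# as a 50/50 guess (the assumption used in the text).
p_son = 0.5 ** 5        # five mediums all correct on the son
p_daughter = 0.5 ** 5   # five mediums all correct on the daughter
p_both = p_son * p_daughter

print(p_son)   # 0.03125
print(p_both)  # 0.0009765625, i.e. roughly 1 in 1024

# Hypothetical control group: 70 people answering one yes/no
# question at random; their hit rate should hover near 0.5.
random.seed(0)
guesses = [random.random() < 0.5 for _ in range(70)]
rate = sum(guesses) / len(guesses)
print(rate)
```

The point of the comparison is that the binary son/daughter pattern produced by all five mediums is about a one-in-a-thousand event under pure 50/50 guessing, while random guessers land near 50% per question.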
There is no way that the results from this control group could provide a comparison or baseline for simple guessing.
VERITAS: The phrase "There is no way" is another extreme and false statement by Hyman. The fact that a control is not ideal does not mean that it is worthless - recall the binary son-daughter data provided above.
The mediums are free to make statements about possible contacts, names, relations, causes of death, and other matters. In the earlier experiments they were given "yes," and "no" replies from the sitters and in later experiments they typically began a segment without feedback and then went through an additional segment with feedback. The sitters were free to find matches within the output of the medium to fit their particular circumstances. Later the sitter was given a transcript of the entire reading and rated each statement for how accurately it applied to her situation. The statements that got the highest rating were counted as hits. The proportion of such hits varied from approximately 73% to 90% in the earlier experiments and somewhat lower in the later ones.
VERITAS: The paragraph above is mostly correct. Note that the range was actually above 95% in the earlier studies. Hyman should cite the data accurately. Once a sufficient number of reviewer errors are observed, the accuracy and credibility of the reviewer become suspect. Science requires accuracy of details. Hyman also ignores the fact that one medium asked only 5 questions and generated over 130 pieces of information. The "feedback" hypothesis clearly does not apply to her data.
In contrast, the control subjects were given a series of questions based on a reading given to their first sitter. Statements from the readings were converted into questions that could be answered in such a way that the answer could be scored correct or incorrect. For example, if the medium had correctly guessed the cause of the sitter's mother's death, a question given to the controls might be, "What was the cause of her mother's death?" Schwartz and his colleagues report that the average percentage of correct answers by the controls was 36%. Because the "accuracy" of the mediums was much higher, the researchers conclude that the mediums had access to true information that cannot be explained away as guessing.
Wiseman and O'Keeffe correctly point out that this is an inappropriate comparison. Although Schwartz claims that, if anything, the controls had an advantage over the mediums, the use of the results for the control groups as a baseline for the mediums is completely meaningless.
VERITAS: Again, Hyman makes a mistaken, overgeneralized statement. He says "completely meaningless." But this ignores, for example, both the binary as well as the detailed information about the deceased son - who was the primary deceased person of interest for this sitter (written on paper before the experiment began). Hyman and I agree that the guessing procedure used was not perfect, but that does not mean that one should throw out all the information. The take-home message is the consistent use of overgeneralization and "complete dismissal" as a cognitive style by Hyman and Wiseman and O'Keeffe.
Wiseman and O'Keeffe provide reasons why. In addition to the reasons they give, a more fundamental one is that the score for the controls does not involve subjective ratings by the sitters while the accuracy scores for the mediums depend entirely upon the judgment of these sitters.
VERITAS: Once again, overgeneralization is present in Hyman's statements. Are ratings of the initial "M," the name "Michael," and the statement "he committed suicide" to be labeled and dismissed as "subjective ratings"? Do the accuracy scores depend "entirely upon the judgment of these sitters," as Hyman concludes? Overgeneralized and extreme words used by Hyman, such as "entirely," show a lack of discernment as well as an apparent bias against these kinds of data.
We have no idea how well the mediums could do if given the same task as the controls. I strongly suspect they could not perform any better.
VERITAS: Here Hyman and I agree on the prediction. However, we do not agree on the meaning. It is probable that mediums cannot "guess" information if given a questionnaire-type guessing task. In fact, the mediums themselves claim they cannot make guesses under these experimental conditions - the experiment has yet to be performed. Presuming that both the mediums and Hyman are correct in their prediction here, what is the implication? The implication is that mediums are not engaged in a simple guessing process when they are allowed to let information come to them intuitively. Hyman forgets that this is our basic conclusion.
The accuracy score for the medium is completely dependent on the subjective decisions of the sitter.
VERITAS: Again, note that "completely dependent" is an overgeneralized and extreme statement. Remember, if GD, a sitter, says "My Aunt's name is Alice" - this is not a "subjective decision" in the way Hyman implies it is.
The very first example of a reading provided in this book begins as follows:
The first thing being shown to me is a male figure that I would say as being above, that would be to me some type of father image....Showing me the month of May....They're telling me to talk about the Big H-um, the H connection. To me this an H with an N sound. So what they are talking about is Henna, Henry, but there's an HN connection.
The sitter identified this description as applying to her late husband, Henry. His name was Henry, he died in the month of May and was "affectionately referred to as the `gentle giant.'" The sitter was able to identify other statements by the medium as applying to her deceased spouse.
VERITAS: Thus far, what Hyman says here is correct. However, I underscore "the sitter was able to identify other statements by the medium as applying to her deceased spouse." I mention this because Hyman later ignores these data when he applies himself to a subset of the data.
Note, however, the huge degree of latitude for the sitter to fit such statements to her personal situation.
VERITAS: Note the extreme language "huge degree" used by Hyman here, again inappropriately.
The phrase "some type of father image" can refer to her husband because he was also the father to her children. However, it could also refer to her own father, her grandfather, someone else's father, or any male with children. It could easily refer to someone without children such as a priest or father-like individual-including Santa Claus.
VERITAS: Of course. The statement "some type of father image" in isolation can have multiple meanings. But the other statements, in context, were important. The month of May, the Big H, and the HN connection fit her husband - and no one else that she knew. It is the pattern or cluster of information that matters here, not just a single item. Is this fact too complex for Hyman to appreciate?
It would have been just as good a match if her husband had been born in May, had married in May, had been diagnosed with a life-threatening illness in May, or considered May as his favorite month.
VERITAS: Again, in theory that's true for the isolated statement "May." However, it is the pattern or cluster that matters (see below).
The "HN" connection would fit just as well if the sitter's name were Henna or her husband had a dog named Hank.
VERITAS: Once again, of course - taken in isolation. Hyman considers these items in isolation just as he considers the experiments in isolation. He ignores the patterns or clusters of the findings - be it at the micro level, item by item, or the macro level, experiment by experiment. The disconnected information-processing cognitive style used by Hyman is the same.
Schwartz concludes that, "No other person in the sitter's family fit the cluster of facts `father image, Big H, Henry, month of May' except her late husband, Henry." Of course not! If that person, or any other, also found a match for their personal life, it too would be unique.
VERITAS: No, if two people had fit the description, the sitter would have said so, and we would have recorded it as such. We would not have used the word unique - we would have stated that two people fit the pattern. However, the fact was, for this particular sitter, only one person fit this description.
When I put myself in the shoes of a possible sitter and try to fit the reading to my situation, I can find a good fit to my father, who was physically large, whose last name was Hyman, and for whom, like any human on this planet, experienced one or more notable events in the month of May. Other things in the reading also can easily be fitted to my father.
VERITAS: This happens. Patterns of information, particularly when they are general (e.g. "the Big H"), can sometimes fit more than one person. So what? The issue is, what is the probability that this actually happens for a sample of subjects? My father's name was Howard. But he was 5'2". This particular cluster of information is a better fit for Hyman's father than my father.
When Hyman says "and for whom, like any human on this planet, experienced one or more notable events in the month of May" he is generally correct. But again, this is stated in isolation. The sitter's deceased Husband, the Big H, whose name was Henry, died in May. Death is quite a "notable event."
Neither the original sitter nor anyone else would fit this cluster of facts!
VERITAS: What we said was that the total pattern of information generated by the medium for this reading did not fit, as highly, any other person we tested in that study. That is all we said. Obviously, Hyman's personal facts - taken in total - will ultimately best fit him. The question is, "When the medium generates clusters of information, do they best fit the specific sitter being read?" That is the scientific question.
Schwartz makes much of the fact that the cluster of facts that a sitter extracts from a reading tend to be unique for that sitter. He even calculates the conditional probabilities of such a cluster occurring just by chance. Naturally, these conditional probabilities are extremely low-often with odds of over a trillion-to-one against chance.
VERITAS: It is revealing that Hyman says "Naturally" as applied to these conditional probabilities. Is Hyman accurately reading what we did and understanding what it means? I suggest no and no. (see below)
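For concreteness, the kind of conditional-probability calculation being debated here multiplies an assumed base rate for each item in a cluster. The base rates below are invented purely for illustration; they are not the rates used in the studies.

```python
import math

# Hypothetical per-item base rates for a cluster of facts
# (invented for this sketch, not taken from the published papers).
base_rates = {
    "father image": 0.25,
    "month of May": 1 / 12,
    "initial H": 0.10,
    "name Henry": 0.01,
}

# Probability of the whole cluster under independent chance guessing
# is the product of the per-item base rates.
p_cluster = math.prod(base_rates.values())
odds_against = 1 / p_cluster

print(f"{p_cluster:.2e}")                 # 2.08e-05
print(f"about 1 in {odds_against:,.0f}")  # about 1 in 48,000
```

Even four modest base rates multiply out to tens of thousands to one; adding more items, or using rarer items such as specific names, is what drives such calculations toward the trillion-to-one figures mentioned above. The result also depends entirely on the base rates assumed for each item and on treating the items as independent.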
The "accuracy" score for the medium, as calculated by the experimenters, depends critically on the sitter's ratings. This allows subjective validation5 and uncontrolled rater biases to enter the picture on the side of the mediums.
VERITAS: In a general sense, Hyman's conclusion here is true. Subjective validation and uncontrolled rater biases will contribute to some degree to the clustering of the scores. But do these factors account for the total clustering of the scores? Should we follow Hyman's simple strategy and throw out all the sitters' scores because a subset of their individual item ratings require interpretation and judgment (and hence personal judgment or "bias")?
Metaphorically, Hyman would have us throw out the baby with the bath water. The deep question is, what's more important, the bath water (Hyman's selective focus) or the baby (the data ignored by Hyman)?
The sitters were deliberately selected because they were already disposed towards the survival hypothesis.
VERITAS: Yes, it is true that most of the sitters we have tested to date are open to the survival hypothesis, if not outright believers in it. This is why we carefully grill them about which ratings are scored "objectively" (e.g. initials, names, causes of death, historical facts) and which are scored "subjectively" (i.e. require interpretation on the part of the sitter). This is one reason we are conducting double-blind studies. Being open to the survival hypothesis is not an excuse to dismiss all the single-blind data employing these sitters. Ratings of initials, names, causes of death, etc., transcend personal beliefs in philosophical questions - they can be scored accurately whether one strongly believes in the survival hypothesis or is an extreme disbeliever in the hypothesis.
Given the statement "some type of father image," the sitter easily fit this to her late husband who was the father of her children. For her, this would get the highest accuracy rating. A more skeptical sitter, realizing the ambiguity in the statement, might give it a lower rating.
VERITAS: This means that Hyman would rate "father-like" as less accurate in this case, even though the cluster - father-like, Big H, and an HN sound - fits his father quite well. Hyman would rather question whether the cluster is a cluster than view the cluster as a pattern.
Note that ratings of "accuracy" are different from ratings of "specificity," ratings of "clarity," or ratings of "meaningfulness." For the record, we are now requiring that our research sitters rate every item on four dimensions rather than one. In this way sitters can distinguish between the accuracy of a statement and its generality / specificity. Hopefully this will enable skeptics and believers alike to discern these fundamental differences.
Given the statement "showing me the month of May", the committed sitter would rate it accurate because her husband actually died in the month of May. A less committed sitter might rate it as less accurate because she realizes that this statement could apply to any significant event that happened to her husband, herself, or her family in May. From the example above, if I were a committed sitter receiving the same reading, I could see myself giving it a score of 5 out of 5 (or 100% accuracy) because my father (obviously a type of father image) experienced one or more significant events in May ("showing me the month of May"), and was large and overweight and named Hyman (about the Big H-um, the H connection...an H with an N sound).
Compare this with the task confronting the control subjects. They would be given a series of questions based on this reading which might go as follows:
1. What was the relation of the deceased to the sitter?
2. What was the name of the sitter's husband?
3. In what month did he die?
4. How was he described by his friends?
VERITAS: Hyman is very confused here. Our stated purpose was to test the control group's ability to guess the information, remembering that it was the mediums who generated the father-like, Big H, HN cluster in the first place. We were not testing for rater-bias, we were testing to see if lay subjects could generate information that fit the sitters as well as the mediums did.
Our question was not "Does this information fit the control subjects?" but rather "Can control subjects generate the information in the first place?" Could the control subjects generate accurate clusters of information, like the mediums' did, by guessing alone? To repeat, can control subjects guess information as well as mediums generate information? The answer was no.
The control students would have to come up with the answers Husband, Henry, May, and Big to get a perfect score. The likelihood of anyone, including the mediums, getting all these correct, or even a high percentage of them correct, is very small indeed.
VERITAS: The likelihood is very small for most people, but it happens quite frequently for the mediums we have tested. Remember, the mediums generated this information in the first place. That is why we continue to publish the findings generated by the mediums.
Mediums generate surprisingly accurate clusters of information, even under double-blind conditions. Even cold readers admit that they can neither guess nor reason their way to creating such data, especially under conditions where they do not know the identity of the sitters and they are not allowed to interact with them.
It is obvious that this is a completely different task from the one performed by the mediums. A strikingly obvious difference is that the sitter's judgments and biases are completely removed from the task given the controls. Indeed, what obviously cries out for controlling is just these potential biases and subjective judgments being made by the sitters.
VERITAS: Note Hyman's strong language "obviously cries out." Hyman does not understand that one does not "control" for people making judgments about their memories for their histories - what one does is add double-blind conditions to verify if ratings of items like the initial M, Michael, and shot himself, or father-like, Big H, and HN, continue to be highly rated. This is the scientific question.
One way that Schwartz assesses the likelihood that his mediums are obtaining their `hits' just by chance guessing is to calculate conditional probabilities of getting a certain pattern of statements that would match the sitter's situation. In the excerpt from the reading I have been using as an example, he might estimate the probability of getting the gender of the sitter's husband as 1/2; the probability of indicating that he was dead as 1/2; the probability of correctly guessing that the deceased person was the sitter's husband as, perhaps, 1/6; the probability of guessing the month of death as 1/12; the probability of getting the correct name as 1/15; and the probability of knowing that he was described by friends as "big" as 1/20 (of course, the particular probability estimates made in most of these cases have to be based on assumptions and guesswork, but Schwartz claims that he errs on the conservative side in making such estimates).
VERITAS: Hyman ignores the fact that in some of our papers, we have gone to actuarial tables to actually calculate probabilities, or collect empirical data to determine relative percentages found in populations available to us. "Assumptions and guesswork" is a pejorative and inaccurate description of the efforts we make when we consider and calculate conditional probabilities.
The combined probability of correctly getting this particular pattern of matches just by chance would simply be the product of these separate probabilities. In my example, the probability of achieving this particular pattern of matches would be less than 1 out of 86,000.
Such a low probability would seem to clearly rule out chance as an explanation for the results. Most of Schwartz' actual calculations typically lead to probabilities of less than one out of a million or even millions.
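Hyman's arithmetic in the passage above is easy to verify: under his stated independence assumption, the joint probability is simply the product of the six per-item estimates he lists. A minimal sketch of that calculation (using exact fractions, so nothing is lost to rounding):

```python
from fractions import Fraction
from functools import reduce

# Hyman's illustrative per-item probability estimates, in order:
# gender of husband, husband is dead, deceased is the husband,
# month of death, correct name, described by friends as "big".
estimates = [Fraction(1, 2), Fraction(1, 2), Fraction(1, 6),
             Fraction(1, 12), Fraction(1, 15), Fraction(1, 20)]

# Assuming the items are independent, the joint probability of matching
# the whole pattern by chance is the product of the separate estimates.
joint = reduce(lambda a, b: a * b, estimates)

print(joint)         # 1/86400
print(float(joint))  # about 1.16e-05, i.e. "less than 1 out of 86,000"
```

The product works out to exactly 1/86,400, which matches the "less than 1 out of 86,000" figure cited in the example; the disputed question in the surrounding text is not this multiplication but whether the independence assumption and the selection of items justify it.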
VERITAS: Hyman forgets to mention that we have calculated conditional probabilities mostly on "white crow" readings that are, by definition, examples of "extraordinary readings." We clearly indicate when readings are "typical" and when they are "extraordinary."
In one case he calculated the probability that the results could have been obtained by guessing as 1 in 2.6 trillion! If these calculations were appropriate they certainly would clearly rule out guessing as an explanation for the mediums' apparent successes.
Probability, however, is a very slippery concept.
VERITAS: Hyman says "a very slippery concept." Of course, probabilities must be viewed with caution. This is why we try to be conservative when we estimate them. And yes, some people will use probabilities to make a point that is questionable or even false. Is Hyman implying that we are trying to be "slippery"? The fact is, we are simply using conventional conditional probability estimates and sharing what the outcomes are.
Even experts have gone badly astray in trying to apply it to situations in the real world. Some of the reasons why Schwartz' conditional probability calculations are inappropriate and misleading in this context involve highly technical considerations concerning conditional probabilities, independence, sample spaces and the like.
VERITAS: Again, Hyman makes an overgeneralized statement, this time implying that our use is seriously tainted by such complications. Since Hyman's goal is to dismiss conditional probability estimates as meaningless in this context, he tosses out such unsubstantiated claims.
However, you can realize something must be wrong here, when you consider that these same types of calculations also provide very low probabilities for any set of matches that any person-the sitter or someone else-finds in a given reading.
VERITAS: Hyman says here "any set of matches." This is an extreme and misleading statement. It shows Hyman's failure to understand how the statistics were actually applied, and what their purpose was.
For example, the pattern of matches that I find in the sample reading with respect to my late father yields a probability of guessing that is so low as to also rule out chance.
VERITAS: True - it would be very low - for the subset of items Hyman has selected that happen to fit him. However, the question is, how frequent in the population is a father-like figure (ignoring other facts that indicate that the medium was referring to a husband, not a father), a Big H, with an HN sounding name? If one calculates the odds, it is indeed infrequent. Note that the actual problem here is not with the statistics. The problem is in Hyman's selectively picking which part of the data he wishes to calculate, and which part (the larger part) he chooses to ignore.
And this will be true for any pattern of matches that any one can find in the same reading. One problem is that Schwartz' calculations do not take into account the enormous variety of possible combinations that could be extracted from a single reading. Each one would be unique to the person for whom that pattern makes sense.
VERITAS: Hyman is implying, somehow, that any person, selecting any cluster, will by chance get a high conditional probability match. This is statistically ridiculous. The scientific question is to discover the frequency of matches, not just that there are statistical matches. Of course, if you selectively pick subsets of matches that fit, the statistics will by definition be high. However, this was not the procedure or purpose of our analyses. Hyman makes a false statement about our purpose and our procedure.
Ironically, such conditional probability calculations could be justified (with some important reservations) for the task given to the control students. Each question they were posed has an explicit answer. If we can make reasonable assumptions about the probability of getting each answer just by chance, and if we can assume that the answers to each question are independent of each other, then we might legitimately try to estimate the probability of getting all the answers correct by multiplying together the probabilities of correct answers for all the questions. Notice that we can do this only because we defined the total set of possibilities and have not selected, after the fact, just those questions that were answered correctly.
VERITAS: Hyman should read, carefully, the Canyon Ranch experiments, where we explicitly had raters score clusters of three or more consecutive correct answers, and then we compared the frequency of such clusters in their ratings of control readings. The findings are clear. The actual readings have more clusters than the control readings. Of course, since this was done single-blind, Hyman will choose to dismiss all the data - in fact, he fails to mention these analyses altogether.
Reliance on Uncorroborated Sitter Ratings
This discussion of the reasons why the control comparison and the calculation of conditional probabilities are inappropriate points to one of the most serious weaknesses in this research program. The `accuracy' ratings of the mediums depend entirely upon the judgments of the individual sitters. Each sitter is solely responsible for validating the reading given to him or her.
VERITAS: Hyman ignores how, from the very beginning, we would ask sitters to indicate whether any other living relative or friend could confirm their ratings, item by item. Hyman implies that we were unaware of this potential problem - when in fact, we were sensitive to it from the outset. What we did not do was typically check to see if the sitters were lying. More on this below.
Each sitter is carefully chosen to be someone who is favorably disposed to the survival hypothesis and who wants the medium to be able to communicate with their departed family and friends.
VERITAS: This is a completely false statement by Hyman. The Miraval Experiment clearly states that we selected ten people who varied in their belief in the survival hypothesis, including two strong skeptics.
If I were to use Hyman's verbiage, I would say "Over and over, Hyman either gets the facts wrong, or ignores them."
I would prefer to say that "In many instances, Hyman presents false or erroneous information or omits important information that is inconsistent with his conclusions."
Schwartz admits that the `accuracy' ratings from sitters who are not so favorably disposed are much lower. Although this is consistent with rater bias, Schwartz has other explanations.
VERITAS: The fact is, I consider alternative hypotheses, ones that happen to be more consistent with the totality of the findings.
For example, if the findings were to be explained entirely by fraud or cold reading, mediums should perform just as well reading skeptics as they do reading believers - that is, so long as the skeptics do not lie (or are so biased themselves that they ignore facts that can be independently verified, e.g. the initials and names of deceased loved ones).
The fact that mediums appear to do worse reading skeptics actually questions the fraud and cold reading hypotheses. Skeptics often fail to see this inconsistency in their logic.
He also believes that just as some mediums are "white crows," there are also sitters who are "white crows"-that is, some sitters are prone to get especially good results.
VERITAS: Hyman knows very well that I avoid using the word "believe." I have told him repeatedly - as I do publicly in lectures - that I am not interested in belief, I am interested in data. What I say is that data indicate that some mediums do very well, and they do very well with certain sitters more than others. This is not a belief. It is an empirical observation derived from experiments.
In other words, some sitters are more prone to give higher ratings of accuracy than do other sitters.
VERITAS: This observation also occurs under double-blind conditions, which is the "gold-standard" experimental design that controls for certain "biases" as explanations for the results. Hyman chooses to dismiss the double-blind data as well (see below).
One simple explanation, consistent with Occam's Razor, is that some sitters are more susceptible to response biases. Schwartz, I am sure, will strongly disagree.
VERITAS: Hyman says "Schwartz, I am sure, will strongly disagree." Again, Hyman makes an extreme statement, since Hyman is "sure" what I will think or do. Hyman is again wrong.
The fact is, I strongly agree that one plausible reason for individual differences in sitter ratings is individual differences in rater bias. It is an explanation that must be considered.
Where we disagree is whether this "simple explanation" that Hyman says is "consistent with Occam's Razor" actually accounts for all the experiments observing individual differences in sitters. The fact is, this simple explanation does not account for all the findings. And certainly, the rater bias question does not account for the totality of the mediumship findings.
Occam's Razor is meant to address the simplest explanation that accounts for the largest amount of the data. Hyman is ignoring the critical point of "largest amount of the data" when he mentions Occam's razor.
This, again, highlights the need for properly conducted research that precludes or adequately corrects for such possible biases. This is why a properly conducted research program requires carefully standardized, reliable and valid dependent variables; truly double blind procedures; appropriate control comparisons; and proper controls for sensory leakage. All of these requirements, as I have explained, are lacking in The Afterlife Experiments.
VERITAS: Again, we see Hyman making an overgeneralized statement. The implication is that each and every experiment suffers from all of these possible areas of concern. Remember, Hyman ignores the early double-blind experiments (which address these areas of concern), and he dismisses the latest double-blind studies that again address these concerns. He implies that we are unaware of these concerns, and that we do not know how to address them. Hyman's impression is as false as his being "sure" that I would disagree with the fact that rater bias needs to be considered as a possible hypothesis for individual differences in rating performance.
Schwartz has tried to counter some of these criticisms by pointing to the fact that much of the information provided by the medium consists of factual material that can be independently checked. For example, specific names, relationships, careers, gender, etc. Yet, he has never bothered to make an independent check on these "facts." He simply accepts the sitters' statements. He argues that it is completely unreasonable to believe that one of his trusted sitters would say `Yes' to a fact that was untrue.
VERITAS: The language "completely unreasonable" reflects Hyman's extreme language, not mine. I have never said that every one of my sitters must be telling the truth when they rate every single item. In fact, if Hyman asked me, I would give him instances where sitters have "lied" in some of their ratings to keep certain information private, out of the public record.
However, what I have said is that, based upon our sitter selection process, the visibility of the experiments, and the need for integrity, I have trusted a sitter when she says "my husband's name is Henry" in the same way that I trust Hyman when he says "my father's name is Hyman." We do not "trust" sitters blindly - we do so quite carefully, with discernment.
This, of course, is using a plausibility argument in the place of a control that should have been incorporated into the research. Perhaps, it is unlikely that a sitter would acquiesce to a factual statement that she or he knows to be untrue.
VERITAS: "Perhaps, it is unlikely…." is a curious statement made by Hyman. Is it "perhaps, unlikely" that Hyman will lie that his father's name is Hyman? Or, is it more accurate to say "highly improbable" that Hyman will lie that his father's name is Hyman?
Of course, Hyman could be lying about his father's last name. In fact, if we are to treat Hyman's review, in total, as evidence of Hyman's memory and integrity, the data could be interpreted as being consistent with the hypothesis that Hyman sometimes lies and has false memories at other times.
NOTE: I am not attempting to be sarcastic here, I am being scientific - i.e. letting the data speak. If data indicated that a sitter was claiming that I had been at Yale for 28 years, or that I had not conducted double-blind studies until late in my mediumship research program, I would have the responsibility to question her or his memory and judgment.
However, his own excerpts from readings given in this book provide one or more examples. In one case, one of his best sitters keeps acquiescing to John Edward's mistaken belief that her husband is dead, even though he is alive and sitting in the next room. As he does, over and over again when he encounters what looks like a miss, Schwartz manages to find a convenient explanation for this peculiar situation. He suggests that this could be a case of precognition because the sitter's husband was killed in an accident some months after the reading.
VERITAS: The truth is, the sitter claimed that she actually consciously lied to John (on tape) in the initial reading concerning her husband. In addition, the truth is, the sitter claimed to have had a history of experiencing pre-cognitive-like phenomena, a history which was confirmed by her husband and friends. The sitter claimed that she suspected that her husband might die before her reading with John. This is the history. The sitter did not want this information conveyed publicly.
NOTE: This was not me "managing to find a convenient explanation" as Hyman states. This was me accurately reporting the facts as conveyed by the sitter and family. Hyman should resist putting false words in my mouth and false intentions in my head. The accumulation of false information conveyed in this review is substantial.
The Laurie Campbell "White Crow" Readings
The book begins with a quotation from William James. "In order to disprove the law that all crows are black, it is enough to find one white crow." James was interested in the possibility of psychic phenomena. He believed that it was sufficient to find one truly indisputable example of a psychic occurrence to demonstrate that violations of natural law were possible. Schwartz claims he has uncovered several white crows.
VERITAS: What I state is that the data are consistent with the conclusion that we have found several white crows. Those are the data.
The performance of his mediums, especially Laurie Campbell and John Edward, earn them the accolade, in his judgment, of "white crow" mediums. He has also found at least one "white crow" sitter in the person of GD.
VERITAS: Note - we now have observed additional "white crow" sitters in subsequent double-blind experiments.
GD is a psychiatric social worker who lost his partner, Michael, to AIDS. GD discovered he had mediumistic powers and believed he was in contact with his deceased partner. He took part as one of three sitters in an experiment with the medium Laurie Campbell. The researchers reported that, "Statistically significant evidence for anomalous information retrieval was found for each of the three sitters investigated in this experiment. However, it is the uniqueness and extraordinarily evidential nature of the particular reading highlighted in this detailed report that justifies focusing on this `white crow' research reading." In other words, the researchers base their report entirely on the results with this one sitter.
VERITAS: Extreme language saying "the researchers base their report entirely on the results with this one sitter" misses the simple point that this was published as a "case study." A case study, by definition, is based upon a single subject.
Of course, the statement "one white crow" is by definition a single case - hence the obvious and appropriate parallel to William James.
Although one of the criteria for the selection of the sitters was their willingness to rate the transcripts of their readings, such ratings were apparently not done at the time this report was written. The experimenters report that GD estimated that the information given by the medium was at least 90% accurate. Presumably this was simply a subjective estimate. In the previous experiments the "accuracy" rating was obtained by calculating the proportion of highly rated items among all of the rated items.
VERITAS: The purpose of the report was not to list every item in every section of the reading. It was to focus on the clear collection of dazzle shots. Hyman is correct that this figure was an estimate in this particular paper, just as we said it was.
Schwartz et al state that the complete reading took over an hour. They promised that the full transcript will be made available at some future date. So far, I have not seen it. So I cannot judge to what extent this reading might be qualitatively different from the readings that I have witnessed or read that have been given by Laurie Campbell (LC).
VERITAS: For reasons of privacy (ever-stricter University guidelines for the protection of human subjects), coupled with our discovery that ultra-skeptics selectively pick information that fits their disbelief in the phenomenon, and ignore important information that challenges their disbelief, we have not made the complete reading available to the general public. Instead, we provide important examples.
In the readings I am familiar with, Laurie Campbell throws out initials, names, and vague statements that appear to me to characterize the readings from the many psychic readers and mediums I have studied over the past 60 years. I witnessed a public demonstration by her at a conference sponsored by Gary Schwartz and Linda Russek in Tucson in March, 2001. I have also carefully studied the complete transcripts of two readings by LC.
VERITAS: Hyman uses the phrase "throws out" as if Laurie Campbell is simply guessing information. In the public group reading she did that Hyman witnessed (the first she had ever done - though Hyman fails to mention this fact), the observation "throwing out" could be applied. However, in the reading reported in detail (but not verbatim) in the published paper, the cluster of initials and names was striking.
If Laurie was guessing, it was a truly outstanding cluster of guesses (as indicated by the conditional probabilities that specifically applied to this white crow reading).
At first blush the reading given for GD appears qualitatively different. From what we are told, LC apparently stated that the recipient of the reading was named George (true) even though she was supposedly completely blind to his identity.
VERITAS: By all accounts, including Hyman's own judgment (see below), Laurie was blind to the identity of the sitter.
She also correctly indicated that the primary deceased person for GD was a male named Michael (true). She also provided the name "Alice" and later, during the interactive part of the reading, correctly stated that this was GD's deceased aunt. Among the list of names she included in her reading was one that she said sounded like "Talya," "Tiya," or "Tilya." GD has a friend that he calls "Tallia." LC mentioned a deceased dog whose name began with an "S." GD had a beloved dog with an "S" name (but not the name used by LC). Other names were also relevant including that of GD's father "Bob." The researchers cite other qualitative hits that they believe provide powerful evidence that LC is getting information from a paranormal source.
VERITAS: This is a reasonable summary of some of the main findings.
This paranormal source, the authors argue, is not simply extra-sensory perception based on GD's thoughts. This is because in the interactive phase of the reading "not only were each of the four primary people described accurately by LC, but four additional facts not known by GD and later confirmed by sources close to GD indicated that exceptionally accurate information was obtained for GD's deceased and close friends." Because of this, Schwartz argues that the medium is most likely getting her information from the deceased individuals rather than from the sitter's thoughts.
VERITAS: More precisely, what we propose is that the survival hypothesis is the simplest and most parsimonious explanation that accounts for the totality of the data, including these four important pieces of information.
At the time of the reading, GD mistakenly thought that LC had erred by stating that the granddaughter of his aunt Alice was named "Katherine" because he believed the name was spelled "Catherine." When GD later checked, he discovered that his cousin's name was indeed spelled with a K instead of the C that he was thinking during the reading. Another striking example is where LC said "that M showed her where he lived; somewhere in Europe, and his parents have a `heavy accent' (M was German). LC reported that M was showing her a big city, and then M was traveling through the countryside to his home...LC claimed that M showed her an old, stone `monastery' on the edge of the river on the way to his parents' home. This information was not known to GD prior to the reading. After the reading, GD telephoned M's parents in Germany and learned that there was an old abbey church along the river's edge on the way to their house, and that they had held a service for M in this monastery-like stone building a few weeks prior to the experiment."
These are examples from this reading that Schwartz insists that the skeptics cannot explain away in terms of normal causes such as guessing and cold reading, fraud, or unwitting sensory leakage. However, the experiment is compromised by so many serious defects that it would be futile for a skeptic to accept this challenge.
VERITAS: Hyman's extreme use of language - "so many serious defects that it would be futile..." - implies that skeptics should simply dismiss the total set of findings reported in this paper. Is the extreme conclusion justified by the design of the experiment?
Again, metaphorically, Hyman can wave his overgeneralization magic wand and make believe that the total set of findings will go away - but this is not science, it is bias.
This would be another example of placing the burden of proof on the wrong shoulders. Although the experimenters try to make a plausible argument against collusion between LC and GD, as well as against the possibility that LC might somehow have gotten access to the manuscript of GD's forthcoming book (a copy of which was in Schwartz's possession), the actual controls against such sensory leakage were not very convincing.
VERITAS: The issue is not whether Hyman is 100 percent convinced, the issue is whether there is any plausible evidence that sensory leakage can explain the totality of the findings (including information that GD himself did not know). The truth is, there is none (as Hyman acknowledges below).
Indeed, the authors partially acknowledge this defect. "Since the exceptional nature of the data reported here was not anticipated ahead of time, the experiment did not include additional desirable controls..." Although I see no reason to assume that fraud did occur in this instance, I believe that the experimenters have an obligation to their mediums and sitters, as well as to the scientific community, to take all reasonable steps to preclude fraud as a possibility. By taking such steps they protect their subjects from any suspicions that might arise in this area.
VERITAS: Because we are conservative, we point out that the study was not perfect. However, we took very plausible and careful steps to ensure that fraud was not involved in this experiment - as the paper documents - and our subsequent experiments were even more "fraud proof."
Hyman admits "I see no reason to assume that fraud did occur in this instance." So what justifies his severe dismissal of the findings? One hypothesis is a philosophical bias that taints his reading and reporting of the facts.
The results would have become more interesting if they had been collected under double-blind conditions-that is, under conditions where LC, GD, and the experimenter, Schwartz, were all in ignorance of one another at the time of the reading. Schwartz calls the experiment "single-blind" because at the time of the reading (at least the first portions of it), GD did not know who the medium was and LC did not know who the sitter was and was separated from him by a thousand miles. Unfortunately, the experimenter, who did know the identity of the sitter as well as quite a bit of his personal history, was with LC at the time she was giving much of the reading. Psychical researchers have a long history of dismissing data collected with this weakness as non-evidential.
VERITAS: What is correct to state is that a subset of parapsychologists will dismiss all experimental findings as "non-evidential" if they are not collected double-blind. Hyman does so devoutly. He would rather simply dismiss the data, even though he sees "no reason to assume that fraud did occur in this instance" rather than entertain them.
My colleagues and I are agnostics in principle. We present the data, we point out their limitations and their potential importance, and we encourage more research. We state the current status of which hypotheses account for the largest amount of the data, and we appreciate that future research could shift the evidence toward another hypothesis.
Probably the most serious weakness of this experiment is that its outcome relies entirely upon the uncorroborated judgments of the sitter GD. Again, Schwartz relies on plausibility arguments for the reliability and validity of GD's ratings of the reading. This is a major defect for many reasons. One is simple rater bias.
VERITAS: GD is a poor case for Hyman to pick on as confounded by possible rater bias. GD has detailed records, independent reports, photos, etc., supporting his judgments. If Hyman had read GD's book, he would know that GD has the records.
Individuals can differ widely as to what they will or will not accept as valid for their personal situation. When LC says that she is hearing a name that sounds like Talya, Tily, or Tilya, a sitter with a strict criterion might not accept this as referring to a friend whose name is Tallia.
VERITAS: We conducted an experiment to see how many people know someone with a name sounding like Talya. The percentage we obtained was approximately 2 percent. By itself, this finding is merely interesting. However, when the 2 percent figure is combined with George, Michael, Alice, Bob, Jerry, etc., the name cluster - in total - becomes very important - especially since GD expressly invited Michael, Alice, Jerry, and Bob to "come to the reading."
On the other hand, a sitter with a looser criterion and who is convinced that the medium is talking about his situation, might accept LC's probe as referring to a friend with the name of Tanya, Tina, Tilda, Tony, Dalia, Natalie or a variety of other possibilities. Schwartz may be right that it is unlikely that GD would misremember or misreport having a friend by the name of Tallia.
VERITAS: But GD has the data. Hyman should not dismiss these particular findings.
However, if the outcome of this reading is so earth-shaking and scientifically revolutionary as he claims it is, I would think that he should at least make the effort to independently check on some of these facts.
VERITAS: The truth is, we did check many of the facts. We obtained photos, written records, and other confirmatory data from GD. We even mentioned this fact at the conference Hyman attended. Hyman is simply wrong in assuming, and stating, that we did not do so for this important white crow case.
This is especially true for "facts" that were unknown to GD at the time of the reading, but were later discovered by him to be true. For example, when GD called M's parents in Germany, how did the questioning take place? Did they speak in German or English? How well does GD speak German? How well do M's parents speak and understand English? Did GD ask the questions in a leading way? Certainly it would have been highly desirable for the experimenters to have independently communicated with M's parents. Indeed, it would have been better if they, rather than GD, did all the checking. Instead, everything depends upon GD. Such reliance on a single individual in such circumstances is called by psychologists "the fallacy of personal validation."
VERITAS: Yes, it would have been more experimentally perfect if we had called the parents instead of GD. This is why we said the extraordinary nature of the findings was unanticipated. We reported the data as they occurred - we did not hide the discovery nature of this experiment. However, Hyman has no justification in dismissing GD's reports in light of all the photographs, writings, and other hard evidence that validate GD's reports.
"Replication" of the LC/GD Reading in a Double-Blind Experiment
What is required, of course, is a successful replication of these apparently spectacular results in a reading conducted under properly double-blind conditions. Indeed, this is precisely what Schwartz claims he has achieved.
VERITAS: The paper that replicates these "apparently spectacular results" will appear in the April 2003 issue of the Journal of the Society for Psychical Research.
He and his colleagues finally conducted a double-blind experiment using LC as the medium and six sitters, one of whom was GD.
VERITAS: As mentioned previously, Hyman ignores the original double-blind studies with Laurie Campbell (since he claims to have read the book). Hyman's stated "finally conducted a double-blind experiment" is both pejorative and false.
During the readings, LC and the sitters had no contact and the two experimenters who were with LC were blind to the order in which the sitters were run. Later each sitter was sent two transcripts to judge. One was of the actual reading for that sitter and the other was of a reading given to another subject. The sitters were given no clues as to which was their actual reading. "The question was, even under blind conditions, could the sitters determine which of the readings was theirs?
"The findings were breathtaking. Once again it was George Dalzell's reading [that] stood out... This provided incontrovertible evidence in response to the skeptics' highly implausible argument against the single-blind study that the sitter would be biased in his or her ratings (for example, misreading his deceased loved ones' names and relationships) because he knew the information was from his own reading.
"The skeptics' complaint becomes a completely and convincingly impossible argument in the case of the double-blind study.
"It appeared to be the ultimate `white crow' design."
As these quotations reveal, Schwartz believes this double-blind experiment has put to rest all the skeptical arguments against his evidence. One of Schwartz's mantras in relation to his afterlife experiments is "Let the data speak." When I read the full report7 of this "ultimate `white crow' design," the data did speak loud and clear. However, the story they told is just the opposite from the one that Professor Schwartz apparently hears.
VERITAS: In our book The Afterlife Experiments, written for the lay audience, we used some strong language when we described this - precisely because the replication was so strong. However, Hyman's choice of words "is just the opposite" assumes that Hyman is reading and reporting the replication findings accurately and comprehensively. Sadly, he is not.
The plan of the study was admirably simple. LC gave readings to the six sitters in an order that neither she nor the experimenter who was with her knew. In this way neither the medium nor the person in her presence was aware of who the sitter was at the time of the reading.7 At the time of the reading, the sitter was physically separated from the medium. The medium gave her readings in Tucson, Arizona, while the sitters were in their homes in different parts of the country. Subsequently, each sitter was mailed two transcripts. One of the transcripts was the actual reading for that sitter and the other was from the reading of another sitter. Each sitter rated the two transcripts, not knowing which was the one actually intended for her or him, according to instructions provided by the researchers. The sitter first circled every item in the transcripts which they judged to be a "dazzle shot." "For you, a dazzle shot is some piece of information - whatever it is TO YOU - that you experience as `right on' or `wow' or `that's my family.'" Next, the sitter was instructed to go through the transcripts again and score each item as a hit, a miss, or unsure.
Finally, the sitter designated which of the two transcripts was the one that actually was intended for him or her.
VERITAS: This is an adequate summary of the rating procedure.
The hypothesis was that if LC could truly access information from the sitter's departed acquaintances, this would show up on all three measures. In other words, the sitters would successfully pick their own reading from the two transcripts; they would record significantly more dazzle shots in their own transcripts as compared with the control transcripts; and they would find many more hits and fewer misses in the actual as opposed to the control transcript. Each one of these three predictions failed. Four of the sitters did correctly pick their own transcript, but this is consistent with the chance expectation of three successes.
VERITAS: For an n of 6, only 6 out of 6 would reach statistical significance using a binary, yes/no judgment. However, if the trend that was observed continued, an appropriate n of 25 subjects would reach statistical significance for 66% binary detection. We point this out in the discussion. Hyman ignores this fact. Moreover, the curves for all three measures are clearly in the predicted direction.
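The binomial arithmetic behind this point can be checked directly. Here is a minimal sketch, purely illustrative, assuming each sitter independently has a 50-50 chance of picking the correct transcript:

```python
from math import comb

def binom_tail(n: int, k: int, p: float = 0.5) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the chance of k or more
    correct picks occurring by guessing alone."""
    return sum(comb(n, i) * p**i * (1 - p)**(n - i) for i in range(k, n + 1))

# With n = 6 sitters choosing between two transcripts:
print(binom_tail(6, 6))  # 6/6 correct: p ~ 0.016 (below .05)
print(binom_tail(6, 5))  # 5/6 correct: p ~ 0.109 (above .05)
print(binom_tail(6, 4))  # the observed 4/6: p ~ 0.344 (consistent with chance)
```

On these figures, only a perfect 6 of 6 clears the conventional .05 criterion with so small an n, while the tail probability at a fixed hit rate shrinks as n grows; this is the logic behind estimating the sample size needed for a trend to reach significance.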
On the two more sensitive measures, there were no significant differences in number of dazzle shots or hits and misses.
VERITAS: However, what Hyman dismisses is that the pattern of results was in the predicted direction for all three measures, and would indeed be significant with a reasonable sample size if the trend continued - which is the scientifically accepted basis of estimating statistical power and sample size.
The authors admit that for the overall data, "there was no apparent evidence of a reliable anomalous information retrieval effect."
VERITAS: Remember, Hyman ignored the subsequent sentence about (1) how a larger n would be significant, and (2) that GD was the one subject who, as predicted from the single-blind experiment, showed the most dramatic findings.
So how come they use these results to proclaim a "breathtaking" vindication of their previous findings? This is because, when they looked at the results separately for each sitter, they discovered that in the case of GD, who had been the star sitter in a previous experiment with LC, he not only successfully identified his own transcript but also found 9 dazzle shots in this transcript and none in the control. The results for the hits and misses were equally striking. He found only a few misses in his own transcript and a large number of misses in the control. He found many hits in his own transcript and not a single one in the control transcript. Given this "unanticipated replication," the authors hail the results as compelling support for their survival hypothesis.
VERITAS: Hyman's language "hail" is certainly not our language. What we do is point out that the prediction formulated from the single-blind white crow experiment was now replicated in a double-blind experiment, and that this replication is theoretically and practically important.
However, for anyone trained in statistical inference and experimental methodology, this will appear as just another blatant attempt to snatch victory out of the jaws of defeat.
VERITAS: Language again. Hyman says, "Just another blatant attempt to snatch victory out of the jaws of defeat." However, Hyman fails to recognize that if GD had done just as poorly as the other subjects, we would have reported that positive results were not obtained under these experimental conditions. We report the data as they appear, not as we hope they will appear.
The issue is not "victory" or "jaws of defeat" - the issue is, what precisely is the phenomenon? Are there reliable individual differences in reading sitters, or not? The data suggest that individual differences in reading sitters exist.
An accepted principle of research methodology is that the reporting of statistical significance from experimental findings derives meaning from the fact that the experimenter specifies in advance which comparisons he or she will test. If the experimenter plans to make many comparisons, then the criteria for statistical significance must be adjusted to take into account that the more comparisons that will be made the more chances there will be to find something "significant" just by chance. In the present case, it was obvious that the planned comparisons involved the overall differences between the ratings of the actual and the control transcripts. The authors do not indicate whether they intended to make adjustments for the fact that they were using three different measures, but, in any case, it does not matter because there were no meaningful differences on any of the three indicators.
VERITAS: Once again, for a small n study, especially a double-blind one, the sample size limits the power. We included GD precisely because we wanted to see if his previous extraordinary data would be replicated. They were. This prediction was "specified in advance."
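For readers unfamiliar with the adjustment Hyman refers to, the most common version is the Bonferroni correction: with m planned comparisons, each individual test must clear alpha/m rather than alpha. A minimal sketch (the p-values below are hypothetical, chosen for illustration, not taken from the study):

```python
def bonferroni(p_values, alpha=0.05):
    """Flag each test as significant only if its p-value clears
    alpha / m, keeping the family-wise error rate at or below alpha."""
    threshold = alpha / len(p_values)
    return [(p, p < threshold) for p in p_values]

# Three outcome measures (transcript choice, dazzle shots, hits/misses)
# with made-up p-values; each must clear .05 / 3 ~ .0167 to count:
results = bonferroni([0.04, 0.01, 0.20])
print(results)  # [(0.04, False), (0.01, True), (0.2, False)]
```

Note that a p-value of .04, nominally significant on its own, no longer counts once three comparisons are planned - which is the sense in which using multiple measures raises the bar.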
Of course, these strictures do not preclude the investigators from noticing unexpected outcomes in their data. Such unplanned outcomes can serve as hypotheses for new experiments. When an experimenter finds unanticipated, but interesting, quirks in the data, he/she cannot draw conclusions until the surprise finding has been cross-validated with new data.
VERITAS: Hyman is very confused here. Let me dispel the confusion.
GD's "spectacular" performance for the single-blind study was unexpected, and we reported it as so.
However, GD's performance for the double-blind study was predicted, and we reported it as so.
GD was included in the double-blind study as a replication of the single-blind study.
The double-blind study was a "cross-validation" of the single-blind study. It addresses Hyman's unfounded fears of rater bias in this particular subject.
The reason for this is simple. Any set of data that is reasonably complex will always, just by chance, display peculiarities. Some statisticians and methodologists do allow testing for unexpected findings by means of "post hoc" tests. Such tests require that the departures be much greater than those needed for planned comparisons before they can be declared "significant." Furthermore, such post hoc tests on specific subparts of the data are typically licensed only when the overall tests are significant, which is not the case for the present situation.
VERITAS: Again, Hyman is confused and misses the important point.
We are not capitalizing on chance. Quite the contrary, we are observing an extraordinary set of findings from a single-blind study for GD, and then discovering that they replicate in a double-blind study.
This was the primary point in reporting the double-blind study.
So, by commonly accepted scientific practice, the experiment has failed to support the hypothesis it was planned to test. Furthermore, because nothing significant was found, the results do not warrant claiming a successful replication of previous findings. For scientific purposes, this is all that need be said.
VERITAS: "This is all that need be said"? Only if one chooses to ignore GD's single-blind data, followed by his double-blind data. This is what Hyman does.
However, it may be edifying to discuss some additional reasons why the claim for a successful "replication" is highly suspect in the present case. Three of the six sitters for this experiment were selected just because LC had provided "successful" readings for them in previous experiments. They were included to see if she could do so again. For two of them, the authors admit that she failed. So, it is only for GD that, in their view, she apparently succeeded.
VERITAS: Note - only GD was a super-star, the other two were merely "successful." Hyman fails to remember why we used the term "white crow" in the first place. Had the other two sitters been white crows, we would have reported "three white crows."
We used the "one white crow" terminology very carefully in the case of GD. GD was the super-star in those two studies.
Comparing the two readings that LC gave GD, I find little to support the claim that the second one replicates the apparent success of the first one. Although a full transcript of the first GD reading is still not available, what was included in the first report strongly suggests that the second reading cannot be considered to be aimed at the same individual for whom the first one was given. GD's major interest in mediumship is to establish contact with his deceased partner Michael. LC is given credit in the first reading for stating that there was a deceased friend named Michael and then later that he was the primary person for this sitter. The name Michael or a deceased partner does not come up in the second reading. Ironically, the name Michael does appear in the control reading. In the first reading LC mentions a strange name that sounded like "Talya," "Tiya," or "Tilya." GD stated that he indeed had a friend (living) named Tallia. No such name appears in the second reading. Indeed, of the 20 names LC produced in the first reading only three come up in the second reading, and these are such common ones as George, Robert or Bob, and Joe or Joseph. In none of these three cases does she identify whether the person is living or dead or what relationship he has to GD. None of the "specific" facts that she apparently stated during the first reading come up in the second one.
VERITAS: Readers who examine the published report carefully, and not selectively, will note that we point out that substantially different information was obtained in the double-blind reading for GD.
This is curious because if Laurie was somehow cheating, and she wanted to show a simple replication, she would have repeated a significant portion of information obtained in the single-blind reading.
What is quite interesting is that, when rated under double-blind conditions, highly discernible information clearly distinguished between the two transcripts for GD. This removes the possible argument that GD was recognizing the information from the previous experiment. Hyman ignores this important fact.
Schwartz claims that the rater bias could not have affected the ratings of this double-blind experiment.
VERITAS: Hyman takes this comment out of context. What I was saying was that the arguments used by Hyman for justifying double-blind procedures remove those rater-bias concerns that are present in single-blind studies (e.g. prior knowledge that a given reading belongs to the sitter).
A look at GD's dazzle shots and his discussion of the hit and miss data suggests otherwise. His first dazzle shot is "Bob or Robert." These names occur early in the reading in a statement that goes, "And then I could feel like what I thought was like a divine presence and the feeling of a name Mary or Bob or Robert." This appears in a context with other names and other general statements, none of which even hint of a father. The second dazzle shot is "George." Again this appears in a context with no hint that this could be referring to the sitter. LC states, "I got like some names like a Lynn, or Kristie, a George." His third dazzle shot is the statement, "I had the feeling of a presence of an Aunt." GD identifies this aunt as his aunt Alice, although LC does not provide the name Alice anywhere in the reading. I count at least 27 names thrown out by LC during this second reading. Actually, she covers a much broader range of names because she typically casts a wide net with statements like: "And an `M' name. More like a Margaret, or Martha, or Marie, something with an `M.'" It is up to the sitter to find a match. As indicated by his dazzle shots, GD is strongly disposed to do so.
VERITAS: Here is the truth. We encouraged GD to provide his own commentary to help us understand his thought processes. And we convinced the journal to publish his commentary. Hyman should be thanking us for requesting that GD do this.
The fact is, in most double-blind studies, subjects are not requested to explain their answers. If they did, Hyman would end up dismissing their double-blind studies too.
However, because we are truly interested in the phenomenon and process, we keep seeking more information. We encourage sitters and mediums to share what they are doing so that we can better understand the nature of the process.
In his qualitative commentary, GD was obviously influenced in selecting one of the transcripts as his reading because it begins with the statement, "I kept feeling the presence of a male." The control reading happens to begin with the statement, "Now, um to start with I felt like a woman's energy." GD wrote, "I was impressed that the reading is gender specific and accurate..." Instead of assuming that LC was somehow conveying information to GD from his departed relatives, it is just as plausible to assume that once GD decided that the actual transcript was meant for him, then subjective validation took over and did the rest. There is, of course, a 50-50 chance that the actual reading is the one that GD will decide is meant for him. From then on, he would read that transcript as if it were truly describing his departed relatives and reject the other as not relevant.
VERITAS: Note that Hyman is now dismissing most of the highly specific information scored in GD's double-blind scoring as potential rater bias because we were asking the subjects to guess which of two readings belonged to them, and the sitters made hypotheses about which of two readings belonged to them!
The gold standard is double-blind, and now Hyman would have us dismiss double-blind experiments too.
This conjecture fits well with everything we know about subjective validation and the acceptance of personality sketches that one believes were meant for oneself. Is this far-fetched in GD's case? To me, it seems quite obvious just reading the transcript and looking at GD's ratings.
VERITAS: Hyman says "it seems quite obvious" to him that he can easily dismiss the entire set of ratings as rater bias. Hmmm….
The question we should be asking ourselves is, who is engaged in greater rater bias? GD - who provides detailed evidence for his ratings - or Hyman?
The entire case for the reading's validity is based on the assumption that LC is describing GD's summer vacation home on Lake Erie in upstate New York. Given this assumption everything is then interpreted within this context. Of course, LC never states that she is describing a summer vacation home. It is GD who makes this connection.
VERITAS: GD makes this connection from the cluster of specific facts that happen to fit this important vacation home. Hyman should know that humans, especially smart ones instructed to conduct item by item scoring, will generate hypotheses.
As just one of many examples of how GD is creative in making the reading fit his circumstances, he gives LC credit for having identified the color of their summer cottage, which was painted yellow with white trim on the windows. LC does, at one point, say, "And I kept getting colors of like yellow and white." This is in a context where she is talking about a woman who spends all her time in the kitchen. One could construe this as perhaps describing the interior colors of the kitchen, the woman's clothing, or the old mixer she is described as using, among other possibilities. However, the statement is far removed from any mention of the exterior of the house as such. Earlier in the reading she mentions a white house. A little bit further on, she again mentions a house. She immediately follows this with "And I kept seeing the colors of like grays and blues, but that looked real weathered." Obviously, if the house had been gray and blue, LC would have been given credit for a direct hit. GD manages to ignore this and gives LC credit for having correctly described the house as yellow and white.
VERITAS: GD does not ignore data that is incorrect or does not fit. Quite the contrary, he rates it as an error!
It is Hyman who ignores data that does not fit his belief about the possibility that mediums might be obtaining accurate information under double-blind conditions.
Again, I suspect that Schwartz will disagree with my interpretation. After all, he has already gone on record that this study "provided incontrovertible evidence in response to the skeptics' highly implausible argument against the single-blind study that the sitter would be biased in his or her ratings (for example, misrating his deceased loved ones' names and relationships) because he knew that this information was from his own reading." Nevertheless, the data are quite consistent with the possibility that all we have to do to account for his "breathtaking" findings is to assume that they are due to rater bias.
VERITAS: "All we have to do" to account for the findings is "assume that they are due to rater bias." Yes Hyman, you can make this sweeping dismissive assumption, and then throw out the double-blind studies too.
But do all the data support rater bias? No, only some of the data - the rest Hyman ignores.
So what is the bottom line? This book describes a program of experiments described in four reports using mediums and sitters. The studies were methodologically defective in a number of important ways, not the least of which was that they were not double-blind.
VERITAS: The truth is, the book describes some double-blind studies conducted prior to the single-blind studies questioned by Hyman. Hyman ignores these prior double-blind studies and findings. He then dismisses all the single-blind studies because none are experimentally perfect - and none can be perfect - since they are single-blind. Is this an accurate or fair conclusion? Is this an honest way to examine the data and draw a conclusion about nature?
Despite these defects, the authors of the reports claim that their mediums were accessing information by paranormal means and that the application of Occam's Razor leads to the conclusion that the mediums are indeed in contact with the departed friends and relatives of the sitters.
VERITAS: Hyman hopefully read the last chapter of the book. In this chapter, I outlined eleven speculations of the skeptics, eleven interpretations of the mediums, and eleven conclusions based upon the data discovered to date. It was only after going through such a comprehensive and integrative exercise that I came to the conclusion that the totality of the data can most simply and parsimoniously be explained by the survival of consciousness hypothesis. No other single hypothesis (e.g. fraud, or cold reading, or rater bias, or chance) accounts so simply and parsimoniously for the totality of the data as does the survival hypothesis.
This is simply a description of the current state of the research.
Schwartz's demand that the skeptics provide an alternative explanation to their results is clearly unwarranted because of the lack of scientifically acceptable evidence. A fifth report describes a study that was designed to be a true double-blind experiment. The outcome, by any accepted statistical and methodological standard, failed to support the hypothesis of the survival of consciousness.
VERITAS: As mentioned previously, this original, small n double-blind experiment generated data that were consistent with the survival of consciousness, and would reach statistical significance (e.g. p < .01) with a reasonable size n. Moreover, the white-crow reading of GD obtained in a single-blind experiment was clearly replicated - in degree of accuracy - in the double-blind study.
Note - Hyman has not yet read our newest paper that reports on two multi-center double-blind experiments. These two new experiments replicate reliable individual differences in sitters that can be observed across laboratories and experiments. Hyman's extreme statement "by any accepted statistical and methodological standard" is simply erroneous. The fact is, NIH grant applications require that investigators calculate power analyses and indicate what size n is necessary to obtain statistical significance as a prerequisite for the grant being funded.
Yet, the experimenters offer the results as a "breathtaking" validation of their claims about the existence of the afterlife. This is another unfortunate example of trying to snatch victory from the jaws of defeat.
VERITAS: Hyman's review ends with "another unfortunate example of trying to snatch victory from the jaws of defeat." Is this an accurate summary of the work?
I provide some summary comments below.
Schwartz's Summary: Failure to Integrate Information and to See the Big Picture
In most areas of science, no single experiment is perfect or complete. Different experiments address different conditions and different alternative explanations to different degrees. The challenge is to connect the dots of the available data and integrate the complex set of findings using the fewest number of explanations [i.e. Occam's razor].
Hyman reveals in his review that he learned as a teenager that it was easy for him to fool many people with palm reading. It is also quite easy to fool many people with fake mediumship, as anyone trained in cold reading will tell you. I have studied a number of books on cold reading and have taken some classes on cold reading myself.
However, just because it is possible sometimes to be fooled [especially by the masters of magic] doesn't mean that everyone is fooling you.
Hyman reluctantly agrees that it is improbable that the totality of our findings can be explained by fraud. His preference is to propose that the set of findings collected to date must involve a complex set of (a) subtle cues providing information in some studies, (b) cold reading techniques being used in some studies, (c) rater bias providing inflated scores in some studies, and (d) chance findings in some studies. The idea that mediums might be obtaining anomalous information that can most simply and parsimoniously be explained in terms of the continuance of consciousness is presumed to be false by Hyman until proven otherwise.
The truth is, it is impossible to integrate the totality of the findings in any area of science if one selectively (consciously or unconsciously) ignores those specific findings that do not fit one's preferences or biases.
I admit, adamantly, that I have one fundamental bias - my bias is to discover the truth, whatever it is. Discovering the truth cannot be achieved through selective reporting of history, procedures, and data.
The truth is, when the totality of the history, procedures, and findings to date are examined honestly and comprehensively - not selectively sampled to fit one's particular theoretical bias - something anomalous appears to be occurring, at least with a select group of evidence-based mediums.
Over and over, from experiment to experiment, findings have been observed that deserve the term "extraordinary." In our latest double-blind multi-center experiments, stable individual differences in sitters have been observed that replicate across laboratories and experiments. The observations are not going away - even with multi-center double blinding.
Hyman once told me on the phone, "I have no control over my beliefs." When I asked him what he would conclude if a perfect large sample multi-center double-blind experiment was conducted, his response was "I would want to see your major multi-center double-blind experiment replicated a few times by other centers before drawing any conclusions."
This conversation is revealing. Until multiple perfect experiments are performed and published, Hyman would rather believe that the totality of the findings must be due to some combination of fraud, cold reading, rater bias, experimenter error, or chance - even if this requires that he selectively ignores important aspects of the history, designs, and findings in order to hold on to his beliefs.
Why spend the time and money conducting multiple multi-center double-blind experiments unless there are sufficient theoretical, experimental, and social reasons for doing so?
The critical question is, "Is it possible, consistent with the actual totality of the data collected to date - viewed historically as well as across disciplines - that future research may lead us to the conclusion that consciousness is intimately related to energy and information, and that consciousness, as an expression of dynamically patterned energy and information, persists in space like the light from distant stars?" This is ultimately an empirical question - it will be answered by data, one way or the other.
If positive data are obtained - and I emphasize if - accepting the data will require that we be able to change our beliefs as a function of what the data reveal.
The Afterlife Experiments book was written to encourage people to keep an open mind about what future research may reveal.
For the record, despite our differences, I respect Hyman's obvious commitment to reviewing this controversial work, even if it is heavily biased by his palm reading history / hard line double-blind / ultra-skeptical orientation.
Hyman knows that the motto of our lab, as mentioned above, is "If it is real, it will be revealed, and if it's fake, we'll find the mistake."
Concerning the mediumship research, Hyman's review indicates that he strongly believes that it must all be a mistake, and that my colleagues and I are seriously misguided in failing to recognize the plethora of mistakes.
However, as I have explained above, Hyman can hold on to his "mistake" conclusion only if he ignores important historical, procedural, and empirical facts, and then overgeneralizes and dismisses the information indiscriminately.
The truth is, when the complete set of facts is actually placed on the table, the conclusion is not as black and white as Hyman paints it.
Quite the contrary - anomalous observations in mediumship research appear to be persistent and challenging.
Meanwhile, in the process of addressing the survival of consciousness hypothesis, it appears that we will likely learn something about the skeptically-biased mind. Hyman seems unaware that he has made what might be called the ultimate reviewer's mistake.
I, for one, find the skeptically-biased mind hypothesis as intriguing as the survival of consciousness hypothesis.
May research continue, and may we listen to what the data reveal.
* The title of my response follows Ray Hyman's review in the Skeptical Inquirer [Jan-Feb, 2003] of The Afterlife Experiments [2002, Pocket Books / Simon and Schuster] titled "How Not to Test Mediums" which was based upon Martin Gardner's book How Not to Test a Psychic [1989, Prometheus Books]. (Return to text)
** Gary E. R. Schwartz, Ph.D., Linda G. S. Russek, Ph.D., Donald E. Watson, M.D., Laurie Campbell, Susy Smith, Elizabeth H. Smith (hyp), William James, M.D.(hyp), Henry I. Russek, M.D.(hyp), & Howard Schwartz, M.S.(hyp). "Potential medium to departed to medium communication of pictorial information: Exploratory evidence consistent with psi and survival of consciousness." The Noetic Journal 2(3) July, 1999. (Return to text)
1. Fans of Martin Gardner will recognize the similarity of this title to that of Martin's book How Not to Test a Psychic (1989, Prometheus Books). I thank Martin Gardner for agreeing to let me adapt his title for this review. (Return to text)
2. The principle usually attributed to William of Occam is typically stated as "entities are not to be multiplied beyond necessity." This statement, as such, cannot be found in the extant writings of William, and the principle was known before William was born. However, he did write many different statements that are consistent with the principle, such as "It is vain to do with more what can be done with fewer." (Return to text)
3. Wiseman, R. & O'Keeffe, C. (2001). Accuracy and replicability of anomalous after-death communication across highly skilled mediums: a critique. The Paranormal Review, 19:3-6. [Also in The Skeptical Inquirer, November/December 2001] (Return to text)
4. Schwartz, G.E. (2001). Accuracy and replicability of anomalous after-death communication across highly skilled mediums: a call for balanced evidence-based skepticism. The Paranormal Review: 20. (Return to text)
5. For discussion of this concept and for a very striking illustration of subjective validation in operation see Marks, D. (2000, second edition), The Psychology of the Psychic. Amherst, NY: Prometheus Books. (Return to text)
6. Schwartz, G.E., Geoffrion, S., Shamini, J., Lewis, S, and Russek, L.(Submitted to the Journal of the Society for Psychical Research). Evidence of anomalous information retrieval between two research mediums: Replication in a double-blind design. (I obtained a copy of this report from Professor Schwartz in August, 2001). (Return to text)
7. Unfortunately, the double-blind procedure was not ideal. The research coordinator, who was aware of the sitter's identity, phoned LC and the sitter just before the reading. In this way, the medium had contact with someone who was aware of sitter's identity just prior to the reading. (Return to text)
By rickyjames, Section News
Posted on Thu Jun 5th, 2003 at 08:16:40 AM EST
USA Today had a front-page article in yesterday's edition that's not to be missed. If nothing else, the article reflects the level of perception and information the general public has on this topic. It reviewed previous efforts and techniques that have discovered dozens of Jupiter-sized planets in the past ten years...and new efforts that will identify life-bearing Earth-sized planets in the next two decades. Excerpts:
"We know for certain there are a hell of a lot of planets out there, easily 10 billion in our galaxy alone," says astronomer Steve Vogt of the UCO/Lick Observatory at the University of California-Santa Cruz. Current data suggest at least 12% of nearby stars have Jupiter-size planets, he says, and 3% might have Earth-size ones. Within 10 years, an Earth-size planet -- the size that scientists consider the most likely to contain oceans and therefore life -- is expected to turn up in searches by two scheduled NASA probes. Astronomers hope to be able to detect life, or rule it out, in such places within 20 years.
Update [2003-6-6 12:37:30 by rickyjames]: A couple of other interesting space-exploration links I've run across lately that I don't intend to write up as a full blown story are here and here. Check 'em out.
The USA Today article also detailed a number of planned space missions aimed squarely at detecting Earth-size planets:
Kepler: Named after Johannes Kepler, the 17th-century astronomer who divined the laws of orbital motion, this NASA mission in 2006 is expected to find about 30 Earth-size planets in Earthlike orbits. It will detect ''transits,'' the slight dimming of a star's light that occurs when a planet passes in front of it.
Space Interferometry Mission (SIM): A NASA mission in 2009 expected to detect one or two Earthlike planets from a survey of 200 nearby stars. It will measure the tiny side-to-side wobbles that orbiting planets induce in their stars through gravitational pull.
Terrestrial Planet Finder (TPF): Scientists are still deciding what instruments to install in this NASA search for life in 2014 on an Earthlike planet. ''The goal is a picture of an Earthlike planet to run in every newspaper, above the fold,'' says Traub, a member of one TPF development team.
Darwin: The European Space Agency plans to launch this flotilla of small telescopes in 2015. By combining the light from telescopes to make a powerful ''interferometer'' device, the mission should be capable of detecting the chemical composition of atmospheres on Earthlike planets discovered by SIM or other missions. For example, detection of high levels of oxygen would be a dead giveaway for photosynthesis - and life.
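The two detection signatures these missions rely on lend themselves to back-of-envelope estimates. The sketch below is my own illustration, not from the USA Today article; the radii and masses are standard textbook values, and the distance of 10 parsecs is an assumed example:

```python
# Illustrative constants (standard approximate values, not from the article)
R_SUN_M, R_EARTH_M, R_JUP_M = 6.957e8, 6.371e6, 7.1492e7   # radii, metres
M_SUN_KG, M_EARTH_KG, M_JUP_KG = 1.989e30, 5.972e24, 1.898e27

def transit_depth(r_planet_m, r_star_m=R_SUN_M):
    """Kepler's signal: fractional dimming during a central transit,
    equal to the ratio of the planet's and star's disc areas."""
    return (r_planet_m / r_star_m) ** 2

def astrometric_wobble_arcsec(m_planet_kg, a_au, dist_pc, m_star_kg=M_SUN_KG):
    """SIM's signal: angular amplitude of the star's reflex motion.
    The star circles the barycentre at roughly a * (m_planet / m_star) AU,
    and 1 AU seen from 1 parsec subtends 1 arcsecond by definition."""
    return a_au * (m_planet_kg / m_star_kg) / dist_pc

earth_dip = transit_depth(R_EARTH_M)       # ~8.4e-5, i.e. ~84 parts per million
jupiter_dip = transit_depth(R_JUP_M)       # ~1%, over 100x deeper
earth_wobble = astrometric_wobble_arcsec(M_EARTH_KG, 1.0, 10)  # ~0.3 microarcsec
jupiter_wobble = astrometric_wobble_arcsec(M_JUP_KG, 5.2, 10)  # ~500 microarcsec
```

The contrast explains the article's timeline: Jupiter-size wobbles are already measurable from the ground, while an Earth-size transit or wobble demands the photometric and astrometric precision of dedicated space missions.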
Last Updated Fri, 06 Jun 2003 10:59:15
VANCOUVER, B.C. - British Columbia is about to make history when it confers official status on doctors of Traditional Chinese Medicine.
It will be the first government in North America to officially recognize and regulate practitioners of the 4,000-year-old system of medicine.
At a ceremony at the University of British Columbia next week, the province's health minister will officially confer the title on more than 200 graduates.
Although the doctors won't be given equal footing to conventional western doctors, it's a feat for the hundreds of traditional Chinese medicine doctors, who have been fighting to legitimize the practice.
FROM Apr. 14, 2003: B.C. licenses Chinese medical practitioners
"I'm happy. In the end I got recognized, I'm going to get a doctor of traditional Chinese medicine here," said Ting Ting Jiang.
When Jiang practiced traditional Chinese medicine in China, she was formally recognized as a doctor; when she moved to Canada, she was not.
In China, both doctors of traditional and western medicine are formally recognized and can work in hospitals.
Under new licensing regulations, the B.C. government and the College of Traditional Chinese Medicine (CTCMP) will regulate who can and can't practice the ancient healing art.
The CTCMP is a non-profit organization created by an act of the B.C. government. Its operations are funded by registration and accreditation fees.
College chair Mason Loh hopes that the province will set an example for other provinces and the United States.
"The real beneficiaries of the licensing of Chinese medical practitioners are the public," he said.
The move will make it easier and safer for patients to choose a Chinese medicine doctor.
He hopes it will be the beginning of a move toward integrating eastern and western styles of healing.
The next step for the college will be to include the costs of a visit to a doctor of traditional Chinese medicine under the province's medical plan.
Written by CBC News Online staff
Homeopathy works on the principle that water retains a 'memory' and now scientists may have stumbled across proof that homeopaths are right.
Today a study reported in the New Scientist magazine shows that even when diluted to homeopathic levels, salt solutions change the structure of hydrogen bonds in water.
The alternative health practice involves treating patients with samples diluted so many times that they are unlikely to contain a single molecule of the therapeutic substance.
For this reason it is ridiculed by many scientists. But practitioners maintain that the water in samples retains a 'memory' of the substances dissolved in it.
Swiss chemist Louis Rey made the discovery while using a technique called thermoluminescence to study molecular structure.
Rey diluted samples of lithium chloride and sodium chloride far beyond the point where any molecules of the original substance could remain.
But after close examination, it appeared that the diluted solutions were different from pure water - 'proof' that water has a memory of dissolved substances.
Martin Chaplin, from South Bank University, London, an expert on water and hydrogen bonding, is sceptical of the results.
He suggested that tiny amounts of impurities in the samples, perhaps due to inefficient mixing, may have accounted for Rey's observations.
19:00 11 June 03
Exclusive from New Scientist Print Edition
Claims do not come much more controversial than the idea that water might retain a memory of substances once dissolved in it. The notion is central to homeopathy, which treats patients with samples so dilute they are unlikely to contain a single molecule of the active compound, but it is generally ridiculed by scientists.
Holding such a heretical view famously cost one of France's top allergy researchers, Jacques Benveniste, his funding, labs and reputation after his findings were discredited in 1988.
Yet a paper is about to be published in the reputable journal Physica A claiming to show that even though they should be identical, the structure of hydrogen bonds in pure water is very different from that in homeopathic dilutions of salt solutions. Could it be time to take the "memory" of water seriously?
The paper's author, Swiss chemist Louis Rey, is using thermoluminescence to study the structure of solids. The technique involves bathing a chilled sample with radiation. When the sample is warmed up, the stored energy is released as light in a pattern that reflects the atomic structure of the sample.
When Rey used the method on ice he saw two peaks of light, at temperatures of around 120 K and 170 K. Rey wanted to test the idea, suggested by other researchers, that the 170 K peak reflects the pattern of hydrogen bonds within the ice. In his experiments he used heavy water (which contains the heavy hydrogen isotope deuterium), because it has stronger hydrogen bonds than normal water.
After studying pure samples, Rey looked at solutions of lithium chloride and sodium chloride. Lithium chloride destroys hydrogen bonds, as does sodium chloride, but to a lesser extent. Sure enough, the peak was smaller for a solution of sodium chloride, and disappeared completely for a lithium chloride solution.
Aware of homeopaths' claims that patterns of hydrogen bonds can survive successive dilutions, Rey decided to test samples that had been diluted down to a notional 10^-30 grams per cubic centimetre - way beyond the point when any ions of the original substance could remain. "We thought it would be of interest to challenge the theory," he says.
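The arithmetic behind "beyond the point when any ions could remain" is easy to check. This is my own back-of-envelope sketch, not from Rey's paper; it assumes the standard homeopathic "centesimal" protocol of repeated 1:100 dilution steps and a starting sample of one mole of salt:

```python
# Expected solute molecules remaining after serial 1:100 dilutions
# (my own illustration; constants and protocol are assumptions, not Rey's data)
AVOGADRO = 6.022e23  # molecules per mole

def expected_molecules(moles_start, steps, factor=100):
    """Expected number of solute molecules left after `steps` serial
    dilutions, each by the given factor."""
    return moles_start * AVOGADRO / factor ** steps

# After 11 centesimal steps a few dozen molecules remain on average;
# after 12 the expected count drops below one, so still-more-dilute
# samples like Rey's should be chemically indistinguishable from
# the pure solvent.
survivors_11 = expected_molecules(1, 11)  # ~60 molecules
survivors_12 = expected_molecules(1, 12)  # ~0.6 molecules
```

This is precisely why any reproducible physical difference between such dilutions and pure water would be so surprising - and why critics look first for impurities and procedural artifacts.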
Each dilution was made according to a strict protocol, and vigorously stirred at each stage, as homeopaths do. When Rey compared the ultra-dilute lithium and sodium chloride solutions with pure water that had been through the same process, the difference in their thermoluminescence peaks was still there.
"Much to our surprise, the thermoluminescence glows of the three systems were substantially different," he says. He believes the result proves that the networks of hydrogen bonds in the samples were different.
Martin Chaplin from London's South Bank University, an expert on water and hydrogen bonding, is not so sure. "Rey's rationale for water memory seems most unlikely," he says. "Most hydrogen bonding in liquid water rearranges when it freezes."
He points out that the two thermoluminescence peaks Rey observed occur around the temperatures where ice is known to undergo transitions between different phases. He suggests that tiny amounts of impurities in the samples, perhaps due to inefficient mixing, could be getting concentrated at the boundaries between different phases in the ice and causing the changes in thermoluminescence.
But thermoluminescence expert Raphael Visocekas from the Denis Diderot University of Paris, who watched Rey carry out some of his experiments, says he is convinced. "The experiments showed a very nice reproducibility," he told New Scientist. "It is trustworthy physics." He sees no reason why patterns of hydrogen bonds in the liquid samples should not survive freezing and affect the molecular arrangement of the ice.
After his own experience, Benveniste advises caution. "This is interesting work, but Rey's experiments were not blinded and although he says the work is reproducible, he doesn't say how many experiments he did," he says. "As I know to my cost, this is such a controversial field, it is mandatory to be as foolproof as possible."
THE TWISTED ORIGIN OF SPHEROMAKS. Researchers at the California Institute of Technology have made important progress in solving a long-standing mystery concerning the formation of spheromaks, self-organizing toroidal plasma configurations that are superficially reminiscent of smoke rings. It is well known that current-carrying plasmas embedded in an initial seed magnetic field can form spheromaks. The formation process is believed to involve some kind of dynamo process, whereby the internal magnetic fields become re-arranged or even amplified so as to achieve a stable minimum energy state for the internal magnetic forces. (Similar minimal-energy state arguments help explain why soap bubbles, for example, tend to be spheres rather than cubes or other shapes.) But until now, no one has definitively demonstrated just how a plasma transforms from an unstable, high internal energy configuration into a spheromak. The new experiment sheds light on the phenomenon by capturing images of plasmas as spheromaks form. The images show that plasma currents initially flow in straight lines along a confining magnetic field. Owing to an effect known as the kink instability, the plasma currents develop bends that twist into a helix (see image at www.aip.org/mgr/png). The helix acts like a coiled current element, or solenoid, which amplifies the original, straight magnetic field. Above a certain threshold in the initial magnetic field, detached plasma spheromaks are formed. The researchers (contact: Paul Bellan, email@example.com, 626-395-4827) confirmed the theory behind the effect by measuring the rapid amplification of the magnetic field inside developing plasma solenoids. Spheromaks are potentially promising routes to plasma-based nuclear fusion, and insight into their formation will help in the design of future experiments - and possibly even a clean, safe energy source.
In addition, spheromak formation is important for explaining the behavior of plasma in the solar corona, as well as understanding the physics of jets that sprout from black holes, galactic nuclei, and other astrophysical objects. (S. C. Hsu and P. M. Bellan, Physical Review Letters, 30 May 2003)
A CARBON NANOTUBE COMPOSITE FIBER, made by injecting single-walled nanotubes into a pipe filled with polyvinyl alcohol to form a gel, can be spun out into 100-meter strands. According to the scientists at the University of Texas at Dallas who created the spinning process, the resulting fibers are "tougher than any natural or synthetic organic fibre described so far," with a tensile strength of 1.8 gigapascals. The 50-micron-diameter fibers are 60% nanotube by weight and have been woven into a fabric. In textile form, the researchers suggest, their composite material could be used for making distributed sensors, antennas, capacitors, and even batteries. (Dalton et al., Nature, 12 June 2003.)
"COLOR FILTERING" AT THE ATOMIC LEVEL. One of the most astounding inventions of the late 20th century, the scanning tunneling microscope, or STM, yields atomic-scale landscapes of electrically conducting surfaces such as metals. Now, researchers at the Colorado School of Mines (Peter Sutter, firstname.lastname@example.org) have demonstrated a new technique, called "energy-filtered STM," which is analogous to putting a color filter on an ordinary microscope. Just as color filters make it easier to discern desired features in a photograph, color-filtered STM makes it easier to distinguish between chemically similar atoms, something that's usually very difficult to do. It can even identify specific chemical bonds on a surface. Conventional STMs employ a metal tip, which, as it turns out, is generally most sensitive to the highest-energy electrons on the surface. These electrons jump or "tunnel" to the tip, giving scientists data to reconstruct an image of the surface. This preference for the highest-energy electrons can be a problem, because it can obscure the signal from lower-energy electrons, which may be associated with different atoms or different kinds of chemical bonds. To address this issue, the new technique employs an indium arsenide (InAs) tip. InAs is a semiconductor, and all semiconductors have a "fundamental bandgap," a range of energies that no electrons can possess because of the 3D atomic structure of the material. In the case of a semiconductor tip very close to a conducting surface, what's more important is something called a "projected gap," a range of forbidden energies that appears when the 3D electronic structure is seen along the tip axis. So because of the projected gap, electrons in a certain energy range cannot tunnel to the tip. 
Adjusting the voltage between the tip and sample can shift this projected gap so that it blocks off the high-energy electrons, making the tip more sensitive to electrons in lower-energy bonds at the sample surface (see images at http://www.aip.org/mgr/png). Researchers can shift this range of forbidden electron energies repeatedly, to build up, for example, maps of specific chemical bonds on a surface, and to analyze how abundant one type of chemical bond is compared to others. This technique is now being explored for 'atom-by-atom' mapping of the composition of alloys of chemically similar elements, which is important for technologies such as thin-film growth that often involve nanometer-scale variations in alloy composition. (Sutter et al., Physical Review Letters, 25 April 2003)
PHYSICS NEWS UPDATE is a digest of physics news items arising from physics meetings, physics journals, newspapers and magazines, and other news sources. It is provided free of charge as a way of broadly disseminating information about physics and physicists. For that reason, you are free to post it, if you like, where others can read it, providing only that you credit AIP. Physics News Update appears approximately once a week.
By Douglas Vakoch
Special to SPACE.com
posted: 07:00 am ET
11 June 2003
Among scientists involved in the Search for Extraterrestrial Intelligence (SETI), it's quite common to be focused on the future, ever mindful that it could take years, or even decades, to find a signal from otherworldly intelligence. But if historian Steve Dick has his way, astronomers will also turn their attention toward the past as they search for life beyond Earth.
"I am a firm believer that history should inform our present actions, that we should learn from our past to help us make good decisions for the future," says Dick, author of Life on Other Worlds: The 20th Century Extraterrestrial Life Debate. "You've heard the saying that those ignorant of history are condemned to repeat it; … without prior preparation based in part on history, we could make big mistakes with big consequences."
Some have suggested that we can anticipate people's reactions to detecting extraterrestrials by recalling the response to the 1938 radio adaptation of H. G. Wells' novel War of the Worlds. Among the listeners who tuned in after the beginning of this radio drama about an invasion of Earth by Martians, some panicked when they believed the play to be a news program about a real event.
Dick cautions against using examples like this to predict reactions to a real SETI detection. Most egregiously, Dick argues, such comparisons use hypothetical face-to-face encounters to predict responses to a SETI detection involving signals sent across interstellar distances. "The Orson Welles War of the Worlds broadcast is the ultimate example of what radio listeners thought was physical culture contact," which Dick argues "is very different from the impact if the extraterrestrials are light years away."
As an alternative, Dick proposes a fundamental principle that should guide "the broader idea that history can and should inform SETI impact studies." According to Dick, we should focus on "analogues based on the transmission of ideas between cultures, rather than on physical culture contact."
"The analogue most often cited is contact between cultures on Earth, which usually led to disaster: Cortez and the Aztecs, Pizarro and the Incas, the Europeans and the American Indians, and many others." For predicting responses to a SETI detection, however, Dick maintains, "these are not good analogues because they involve physical culture contact."
Instead, he proposes analogues based on the dispersion of ideas between cultures. "If a SETI signal is deciphered," Dick suggests, "a tantalizing terrestrial analogue is the transmission of Greek science via the Arabs to the Latin West in the 12th and 13th centuries." In this analogy, "the Greeks are the extraterrestrials," the Islamic scholars serve as the bearers of the message from one culture to another, and "the medieval translators and commentators in Spain and elsewhere are those who bring the new knowledge to the masses."
According to Protocol
Following standard research practices, the first steps that astronomers will take after detecting a signal reflect the normal process of science. After confirming the validity of their discovery, they will share their findings with the world. Even if the research team detecting a signal attempted to keep the discovery quiet, it seems unlikely they could conceal the news for long while confirming it with astronomers at other observatories. In Dick's view, "human nature will not allow that to be kept a secret."
Though there may be frustratingly little to report in the days following signal detection, Dick emphasizes the importance of being as complete as possible from the beginning: "In general people are eager and willing to believe in ETs on the slightest evidence. So, we'd better get the announcement right the first time, as well as any further details. In the absence of information, rumor will fill the void."
There should be no great uncertainty in determining that we have detected intelligence beyond Earth, in Dick's view, because the signals themselves would be unlike anything found in nature: "We are looking for very narrow band signals of the kind that nature does not generate, so the fact that a signal is artificial should be obvious." Instead, Dick expects the ambiguity would come later: "It will be much harder to decipher any message, a problem that I think SETI scientists underestimate. We know from cosmic evolution that civilizations could be billions of years old. Whether we could communicate with such entities is doubtful."
An Apt Comparison
Given the potential value of analogies in SETI, how can we be sure we are making appropriate comparisons? "In order to have a good analogy to contact," says Dick, "one needs very much to specify the scenario." As we have already seen, it is vital to distinguish between scenarios where evidence is transmitted across interstellar distances and scenarios where contact is up close and personal. But even within SETI scenarios involving contact at a distance, we need to consider several other factors.
"Is there just a 'dial tone' signal that indicates general intelligence, or is there a flow of information? Fast flow after immediate translation, or slow flow after perhaps generations of decipherment?" As Dick emphasizes, "These are all very different cases of contact."
In spite of his enthusiasm for using historical analogues, Dick urges vigilance: "I have always been careful to emphasize that historical analogues cannot be used to make predictions. They can only serve as guides to our thinking, as tools that tell us that not all things are possible, nor is just one thing possible." Though analogies can play an important role in preparing for the detection of life beyond Earth, as Dick reminds us, "they are guidelines that must be used with caution."
IN THE NEWS
Today's Headlines – June 13, 2003
ANCESTRY OF AIDS VIRUS IS TRACED
from The Washington Post
New research suggests that chimpanzees and human beings each acquired their versions of the AIDS virus the same way -- by killing and butchering other primates infected with similar microbes.
The prevailing theory of how the first human got AIDS is that about 75 years ago, someone in central Africa cut himself while butchering a chimpanzee for food. The animal was infected with simian immunodeficiency virus (SIV), which after a few adaptive mutations evolved into human immunodeficiency virus (HIV) in its new host. This is known in AIDS research circles as the "cut-hunter hypothesis."
A team of European and American scientists offered in today's issue of Science what might be called the "cut-chimpanzee hypothesis" of how that species acquired SIV, the genetic "father" of HIV. Sometime deep in prehistory, they say, a chimpanzee killed and consumed two monkeys -- a red-capped mangabey and a greater spot-nosed monkey -- each infected with its own particular strain of SIV. Both viruses got into the chimp's bloodstream, probably through an open wound.
METEOR IMPACT IS LINKED TO EXTINCTION OF FISH
from The New York Times
Just as dinosaurs died out 65 million years ago when a meteor struck the earth, many fish and other creatures of an earlier era — about 380 million years ago — may have been similarly killed off.
Writing in today's issue of the journal Science, geologists at Louisiana State University, the University of Texas at Arlington and the Scientific Institute in Morocco report several lines of evidence that point to a meteor impact that coincides with a mass extinction.
Most life was still contained in the oceans then, in the middle of the Devonian geological period that is often called the "age of fishes." The extinction, while global in scale, was less severe than the half-dozen major extinctions in the earth's history.
To believers, the details of Mary's apparitions are well known: She wears a veil, has brown hair and blue eyes, and emerges from a bright light. Her visits vary in timing and duration.
More at http://www.cantonrep.com/index.php?Category=8&ID=105419&r=1
UC Berkeley NewsCenter
By Robert Sanders, Media Relations | 29 May 2003
BERKELEY – The first detailed map of space within about 1,000 light years of Earth places the solar system in the middle of a large hole that pierces the plane of the galaxy, perhaps left by an exploding star one or two million years ago.
The new map, produced by University of California, Berkeley, and French astronomers, alters the reigning view of the solar neighborhood. In that picture, the sun lies in the middle of a hot bubble - a region of million-degree hydrogen gas with 100-1,000 times fewer hydrogen atoms than the average gas density in the Milky Way - and is surrounded by a solid wall of colder, denser gas.
Instead, said astronomer Barry Welsh of UC Berkeley's Space Sciences Laboratory, the region around the sun is an irregular cavity of low-density gas that has tunnels branching off through the surrounding dense gas wall. Welsh and his French colleagues suspect that the interconnecting cavities and tunnels, analogous to the holes in a sponge, were created by supernovas or very strong stellar winds that swept out large regions and, when they encountered one another, merged into passageways.
Formerly secret files finally reveal the truth about the world's most famous UFO incident.
BY JIM WILSON
Miles of government records, including those pertaining to the events at Roswell, are kept in this temperature-controlled vault.
Knowing that old military records often contain startling revelations, we were eager to see what surprises awaited us in the latest disclosure from the Roswell files. For decades, UFO researchers had clamored for the National Archives to come clean about a "flying disc" that supposedly crashed during the Fourth of July weekend in 1947. The Army started the story when it used the term in a press release about a crash that had occurred north of Roswell, N.M. At the time, the sleepy town had the distinction of being home to the only atomic bomber unit (at Roswell Army Airfield) in the world. By the end of that holiday weekend, the Army had retracted its original story, claiming it had been a mistake. The debris that ranch manager Mac Brazel had found on the J.B. Foster Ranch was simply the remains of a weather balloon. The press reported the revised version of events, and the story promptly died.
Government paperwork, however, is immortal. Once a Roswell file was created it became a collection bin for all sorts of UFO-related material. Eventually, the collection moved to a climate-controlled archive in College Park, Md., about a half-hour drive east of Washington, D.C. And there the files would have remained undisturbed were it not for a law that forces the government to periodically review a document's security classification.
During the 1990s, the time limits on keeping Cold War-era records began to expire. Journalists had come to appreciate that these automatically declassified files often contained spectacular information. Declassified records showed how the Atomic Energy Commission intentionally released radiation from its reactors in Hanford, Wash., on unsuspecting civilians in that area. Other disclosures described how doctors working for the federal government were permitted to conduct ghoulish medical experiments on women, children and prisoners. Buoyed by these earlier disclosures, UFO researchers had good reason to hope the 11 boxes of newly opened Roswell files might contain a similar smoking gun.
POPULAR MECHANICS was interested too. In 100 years of covering military affairs, our editors had come to realize that everything the government touches creates a paper trail. If something extraordinary had happened at the Roswell Army Airfield in July 1947, evidence would turn up in the paperwork compiled by sergeants and officers. And it was that possibility that lured us to Maryland. If aliens had landed, the soldiers who chased them would have left a paper trail, too.
As we had hoped, we found the original government records. But first we had to do some digging. Most of the boxes were filled with newspaper and magazine clippings about flying saucers, old books, and government UFO reports that were made public decades ago. There were Betamax videotapes about UFOs. We even found the remains of the infamous balloon reflector, which UFO buffs claim the government planted at the crash site in place of the pieces of the flying disc. These pieces, they say, were taken to an undisclosed military base.
Then, amid the clutter, in Box 1, we found what we were after. It was a seemingly unimportant document titled "Morning Reports, July 1947." This was essentially a log of the day-to-day activity at the base. In much the same way that a police blotter would provide evidence of a bank robbery, the Morning Reports would provide unambiguous evidence of unusual military activity.
As we worked through the Morning Reports line by line, we came to a simple realization: Absolutely nothing extraordinary had happened at Roswell that Fourth of July weekend. There was no indication of an emergency, no mention of a deployment of rescue and firefighting crews, as was the case with other crashes. That was one mystery solved.
For years, UFO researchers had claimed that enlisted men and officers involved in the disc recovery operation were transferred to other bases to ensure their silence. Sure enough, the transfers took place. The paperwork explained why. Several months earlier, in a sweeping postwar military reorganization, Army fliers were systematically transferred to the newly created U.S. Air Force. The men had not been transferred. They had merely changed uniforms. Another mystery solved.
We left the National Archives and Records Administration complex in College Park more confused than enlightened. Surely there was more to the story. As the old saying goes: "Absence of evidence is not evidence of absence." So with this thought in mind we decided to call Frank Kaufmann, the man at the center of the Roswell episode.
Kaufmann was less than the picture of health when we interviewed him for our July 1997 cover story about the 50th anniversary of Roswell. A respected member of the Roswell business community, Kaufmann had been tied to the story from the very beginning. We were saddened to learn he had died in February 2001.
Kaufmann's credibility arose from his willingness to show journalists and book authors a copy of his Separation Qualification Record, which supported his claim that he was an intelligence officer stationed at Roswell in July 1947. After Kaufmann's death, his widow gave UFO researchers access to her husband's papers. One of those researchers was Mark Rodeghier, scientific director of the J. Allen Hynek Center for UFO Studies. In fall 2002, Rodeghier published his findings in the center's journal: "To put it quite plainly, Frank Kaufmann created an altered version of an official document to present a false version of his military career consistent with his claims about his involvement with the events at Roswell. His supposed work in intelligence was used to explain how he came to be so knowledgeable about what crashed at Roswell and the subsequent military cover-up."
Stanton Friedman, a physicist and author of several books on Roswell, tells POPULAR MECHANICS he was not surprised that Kaufmann had altered his service record. "He got out of the service in 1945. He was a civilian employee doing the same job, as a clerk."
Friedman says his suspicions were raised during a 1999 meeting with Kaufmann and several other researchers. He says he asked Kaufmann pointed questions about his working relationship with Maj. Jesse Marcel, who was an intelligence officer at the Roswell base in '47, and Col. William Blanchard, the base commanding officer at that time.
"Kaufmann knew he was dying," Friedman says, explaining why he trusted the answers he received. "I asked Kaufmann, 'Did you take Marcel to the [crash] site?' He said 'no.' I asked him, 'Did you take Blanchard to the site?' He said 'no.'"
Despite the disappointing document disclosure by the National Archives and the discovery that Kaufmann had altered his military records, Friedman says it is premature to close the books on Roswell. He believes convincing evidence of an alien landing exists but that it has yet to be disclosed. And he says he knows exactly where to find it--in vaults at the National Reconnaissance Office and the Central Intelligence Agency.
A study suggests those who consider themselves unlucky are more likely to believe in superstitions associated with bad luck, such as the number 13. What is more, the researchers say, this belief alone can actually lead to "bad luck".
Psychologist Dr Richard Wiseman, who carried out the research, said: "Unlucky people tend to buy into negative superstitions, like having seven years bad luck after smashing a mirror.
"If you're one of these people, the fact that it's Friday the 13th could make you anxious and that will make you more likely to have accidents, drive less well, and perhaps find it harder to relate to other people. So your bad luck could be your own doing."
More controversially, Dr Wiseman believes some people actually want to be unlucky because it helps them to avoid taking responsibility for their own failings.
"It's a way of copping out," he said.
Dr Wiseman, of the University of Hertfordshire, said a quarter of those surveyed thought the number 13 was unlucky.
A total of 4,000 people were asked if they considered themselves lucky or unlucky, and whether they engaged in any superstitious behaviour. The survey found that "lucky" people tended to believe in superstitions designed to bring good luck, such as touching wood, crossing fingers and carrying a lucky charm.
"Unlucky" people were drawn to bad luck superstitions, such as breaking a mirror, walking under a ladder, or having anything to do with the number 13.
The results showed that 49% of lucky people regularly crossed their fingers compared with 30% of unlucky people.
In contrast, just 18% of lucky people became anxious if they broke a mirror, compared with 40% of unlucky people.
But the number 13 brought out the biggest difference between the lucky and unlucky, with more than half of people who considered themselves unlucky dreading the number, as opposed to just 22% of lucky people.
The most widely held superstitious belief was touching wood, which 86% said they did.
That was followed by crossing fingers (64%), not walking under ladders (49%), fear of breaking a mirror (34%), being worried about the number 13 (25%), and carrying a lucky charm (24%).
Dr Wiseman said: "These are surprisingly high figures, and indicate that superstition is alive and well in modern Britain. Indeed, amazingly, 86% of Brits said that they carried out at least one of these superstitious behaviours.
"Even scientists are not immune from superstition. For example, 15% of people with a background in science said that they feared the number 13."
Dr Wiseman has set up a website to continue his Luck Project, where anyone can contribute to the research.
Various schools of thought have proposed the idea that our world is mere appearance, and that there is some kind of underlying mystical truth that can explain everything.
For example, religious mystics propose that it is the supernatural that is the true reality, meditators propose the absence of thought as a profoundly significant state of being, Idealist philosophers propose a "realm of ideas" which is the true reality, promoters of Near-Death Experiences propose that the NDE is the higher reality, and so on. Any idea or experience that diverges from daily experience is inevitably pointed to as the answer.
While it does not belong to this extreme tradition, a recent argument by Dr. Nick Bostrom (Department of Philosophy, Yale University) has made modest waves in the media. According to these reports, Bostrom believes that we are probably living in a computer simulation.
His reasoning is fairly simple. There will come a time when we are able to simulate sentient life on a large scale. If so, an enormous number of lives will be simulated in the future. It is not too far-fetched to think that this number will eventually far exceed the number of people who have ever lived.
True, this argument rests on two premises: it requires computational capacities that can scarcely be imagined at the moment, and it assumes that artificial intelligence is possible. But we can reasonably suppose that such capacities will exist in the future.
Given that the number of future simulated beings would far surpass the number of living beings, the argument goes, we must conclude that we are probably simulated beings ourselves. That is, we are probably part of future humanity's simulations of its own history: nothing but a reproduction of the state of Earth as programmed by the real, technologically advanced humans.
At first glance, the argument seems statistically convincing. After all, while biological reproduction is limited, computational production of simulated beings is only limited by technology.
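The statistical intuition behind the argument can be illustrated with a small sketch. This is not Bostrom's formal presentation; the numbers below are purely hypothetical assumptions chosen to show how, under a principle of indifference, the probability of being simulated approaches 1 as simulated observers come to outnumber real ones.

```python
def prob_simulated(real_observers, simulated_observers):
    """Probability that a randomly chosen observer is simulated,
    assuming indifference over all observers."""
    total = real_observers + simulated_observers
    return simulated_observers / total

# Hypothetical figures: roughly 100 billion humans ever born, versus a
# posthuman civilization running 1,000 full ancestor simulations.
real = 100e9
simulated = 1000 * real
print(round(prob_simulated(real, simulated), 4))  # → 0.999
```

The lesson of the sketch is that the conclusion is driven entirely by the assumed ratio of simulated to real observers, which is exactly the premise the rest of this article calls into question.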
But if one looks at the argument more carefully, it is very similar to the epistemic skeptic's "brain in a vat" argument.
The "brain in a vat" argument posits a situation in which you are a brain in a vat of preserving fluid, connected to electrodes that feed you information about the world "as if you were in reality" and respond to your intended movements as well. In such a situation, the argument goes, you could not tell the difference from being a person "in reality", and therefore it is not wise to reject such a scenario as a possibility.
Likewise, the "simulated reality" argument posits a situation where we are fooled in thinking that we are "in reality" when we are really in a simulation. While the argument does not seek to attack our understanding of reality, it is still, at its basis, an old skeptical argument in a new garb.
As such, the two realist objections still apply to it. In any such scenario there are two possibilities: either the simulation is imperfect and there is a way to discover its true nature, or the simulation is perfect. If the simulation is imperfect, we shall eventually discover its true nature, and the hypothesis will be proven. If the simulation is perfect, then it is a redundant hypothesis of no rational value, since it can neither be proven nor falsified, and explains nothing.
So we have to conclude that even if the "simulated reality" hypothesis is possible, it cannot be considered rationally valid, just like the "brain in a vat" hypothesis.
Actually, there is something else I must tell you. The argument I have refuted is not quite what Bostrom said. Rather, he proposes that there are three possibilities:
"(1) the human species is very likely to go extinct before reaching a "posthuman" stage; (2) any posthuman civilization is extremely unlikely to run a significant number of simulations of their evolutionary history (or variations thereof); (3) we are almost certainly living in a computer simulation" (see www.simulation-argument.com).
Proponents of skeptical arguments would choose the third option as most probable. I have argued elsewhere that I think the first option is the most probable (in my article "The hypothesis of sentient self-destruction"), and indeed the fact that we do not observe ourselves as part of a simulation so far does seem to confirm my hypothesis.
But whether I am right about that is irrelevant here: unless we have evidence of being in a simulation, the third option is simply badly formulated. Either we live in a computer simulation, or there is no point in even considering it.
I can think of two more objections to the "simulated reality" hypothesis, on theoretical grounds. One fact that seems problematic to me is that we are able to discuss this hypothesis at all. If this reality is controlled by people running a simulation, then why haven't they hard-coded obstacles to discussing the simulation, or at least noticed that it was under discussion and stopped Dr. Bostrom's research?
My second objection concerns the future existence, and destruction, of sentient simulations. I would like to think that a human government that lasts that long would also grant us enough freedom to protect our own lives, if nothing else. Under such a government, surely the destruction of sentient beings would be illegal (and if you want to argue that a simulated sentience would be inferior in its capacities, we have already assumed that the simulated consciousness is functionally equivalent).
Of course, there is always a last way out: we can simply assume that the computational capacities needed for these simulations are unattainable, or that artificial intelligence on a par with humans is impossible. However, I see no reason to make that assertion, and as we know, it is usually quite foolish to put limits on future technology.
Evangelicals in the US believe there is a biblical basis for opposing the Middle East road map
Monday June 9, 2003
Just as new life is being breathed into the peace process, religious groups throughout the US are whipping up hostility to the road map. The aim of the Christian-Jewish "interfaith Zionist leadership summit" held in Washington last month was "to oppose rewarding murderous Palestinian terrorism with statehood". Attending the conference were some of the most influential figures of the Christian right; behind them a whole infrastructure of churches, radio stations and bible college courses teaching "middle-east history".
Since the late 19th century, an increasing number of fundamentalists have come to believe that the second coming of Christ is bound up with the political geography of Israel. Forget about the pre-1967 boundaries; for them the boundaries that count are the ones shown on maps at the back of the Bible.
The acceptance of the state of Israel by the UN in 1949 brought much excitement to those who believed the second coming was being prepared for. A similar reaction greeted the Six Day war in 1967. The displacement of Palestinians mattered little compared with the fulfilment of biblical prophecy. Writing in Christianity Today immediately after the Six Day war, Billy Graham's father-in-law, Nelson Bell, claimed the fact that "for the first time in more than 2,000 years Jerusalem is now completely in the hands of the Jews gives the student of the Bible a thrill and a renewed faith in its accuracy and validity."
So as the international community withdrew its embassies after the war, and the UN passed resolution 242 condemning Israel's occupation of the West Bank, the International Christian Embassy was set up to show support for Israel. Since then the Christian right has staunchly opposed trading land for peace or any attempt to broker a settlement by power-sharing arrangements. The destruction of the al-Aqsa mosque continues to be sought after by both Christian and Jewish fundamentalists. US churches are encouraged to form links with Jewish settlers via email and to support them through fundraising.
Happy to have any friend it can get, the Israeli government has long since exploited its connections with far-right US Christian groups. While moderate Christians, such as the Palestinian Bishop of Jerusalem, cannot get to see Ariel Sharon despite repeated requests, the door is always open to southern Baptists and TV evangelists.
What is astonishing about this marriage of convenience is that their version of evangelical Christianity believes that biblical prophecy leads to Armageddon and finally to the conversion of the Jews to Christ. According to the most influential of the Christian Zionists, Hal Lindsey, the valley from Galilee to Eilat will flow with blood and "144,000 Jews would bow down before Jesus and be saved, but the rest of Jewry would perish in the mother of all holocausts". These lunatic ravings would matter little were they not so influential. Lindsey's book, The Late Great Planet Earth, has sold nearly 20m copies in English and another 30m-plus worldwide.
Against this crazy theological background, an ideological battle is now being waged. Despite the fact that apocalyptic prophecy as read by the Christian right ends with another holocaust, some Israeli politicians and journalists are encouraging fundamentalists to stick by the implications of their narrative. In a recent column in the Jerusalem Post, Michael Freund called upon evangelical Christians to lobby against the pressure being put on George Bush by Tony Blair and Colin Powell. "If Jesus were alive today," he wrote, "the US state department would likely criticise him for being a Jewish settler and an obstacle for peace."
There are 45 million evangelicals in the US and they represent a crucial block vote for born-again Bush. It is therefore to his credit that he has resisted their pressure and managed to persuade Sharon to accept the peace plan. Perhaps Bush is able to take the evangelical vote for granted in much the same way as Blair is able to take the left's vote for granted: both have nowhere else to go.
Yet Bishop Riah Abu El-Assal of Jerusalem doesn't trust Bush. He thinks the combination of European impotence and the US's refusal to pressure Israelis into stopping building settlements means the plan is already dead in the water. "It took them six days to occupy the Palestinian territories; they could get out in three," he says. Bishop Riah has persuaded the World Council of Churches to call for sanctions on all products from the occupied territories.
The diocese of Jerusalem runs hospitals in Gaza and Nablus. It's in places like these that the real work of Christian ministry is conducted. By contrast, US evangelicals oppose the peace process and swarm into Iraq to convert its people to Jesus.
The Rev Dr Giles Fraser is the vicar of Putney and lecturer in philosophy at Wadham College, Oxford