Confirmation Bias



Confirmation bias is the tendency to search for, interpret, favor, and recall information in a way that confirms or supports one’s prior beliefs or values.[1] People display this bias when they select information that supports their views, ignoring contrary information, or when they interpret ambiguous evidence as supporting their existing attitudes. The effect is strongest for desired outcomes, for emotionally charged issues, and for deeply entrenched beliefs. Confirmation bias cannot be eliminated entirely, but it can be managed, for example, by education and training in critical thinking skills.

Confirmation bias is a broad construct covering a number of explanations. Biased search for information, biased interpretation of this information, and biased memory recall have been invoked to explain four specific effects: 1) attitude polarization (when a disagreement becomes more extreme even though the different parties are exposed to the same evidence); 2) belief perseverance (when beliefs persist after the evidence for them is shown to be false); 3) the irrational primacy effect (a greater reliance on information encountered early in a series); and 4) illusory correlation (when people falsely perceive an association between two events or situations).

A series of psychological experiments in the 1960s suggested that people are biased toward confirming their existing beliefs. Later work re-interpreted these results as a tendency to test ideas in a one-sided way, focusing on one possibility and ignoring alternatives (myside bias, an alternative name for confirmation bias). In general, current explanations for the observed biases reveal the limited human capacity to process the complete set of information available, leading to a failure to investigate in a neutral, scientific way.

Flawed decisions due to confirmation bias have been found in political, organizational, financial and scientific contexts. These biases contribute to overconfidence in personal beliefs and can maintain or strengthen beliefs in the face of contrary evidence. For example, confirmation bias produces systematic errors in scientific research based on inductive reasoning (the gradual accumulation of supportive evidence). Similarly, a police detective may identify a suspect early in an investigation, but then may only seek confirming rather than disconfirming evidence. A medical practitioner may prematurely focus on a particular disorder early in a diagnostic session, and then seek only confirming evidence. In social media, confirmation bias is amplified by the use of filter bubbles, or “algorithmic editing”, which display to individuals only information they are likely to agree with, while excluding opposing views.


Definition and context

Confirmation bias, a phrase coined by English psychologist Peter Wason, is the tendency of people to favor information that confirms or strengthens their beliefs or values; such beliefs, once affirmed, are difficult to dislodge.[2] Confirmation bias is an example of a cognitive bias.

Confirmation bias (or confirmatory bias) has also been termed myside bias.[Note 1] “Congeniality bias” has also been used.[3]

Confirmation biases are effects in information processing. They differ from what is sometimes called the behavioral confirmation effect, commonly known as self-fulfilling prophecy, in which a person’s expectations influence their own behavior, bringing about the expected result.[4]

Some psychologists restrict the term “confirmation bias” to selective collection of evidence that supports what one already believes while ignoring or rejecting evidence that supports a different conclusion. Others apply the term more broadly to the tendency to preserve one’s existing beliefs when searching for evidence, interpreting it, or recalling it from memory.[5][Note 2]

Confirmation bias is a result of automatic, unintentional strategies rather than deliberate deception.[6][7] Confirmation bias cannot be avoided or eliminated entirely, but only managed by improving education and critical thinking skills.

Confirmation bias is a broad construct that has a number of possible explanations, namely: hypothesis-testing by falsification, hypothesis testing by positive test strategy, and information processing explanations.

Types of confirmation bias

Biased search for information

Confirmation bias has been described as an internal “yes man”, echoing back a person’s beliefs like Charles Dickens’ character Uriah Heep.[8]

Experiments have found repeatedly that people tend to test hypotheses in a one-sided way, by searching for evidence consistent with their current hypothesis.[1]:177–78[9] Rather than searching through all the relevant evidence, they phrase questions to receive an affirmative answer that supports their theory.[10] They look for the consequences that they would expect if their hypothesis were true, rather than what would happen if it were false.[10] For example, someone using yes/no questions to find a number they suspect to be the number 3 might ask, “Is it an odd number?” People prefer this type of question, called a “positive test”, even when a negative test such as “Is it an even number?” would yield exactly the same information.[11] However, this does not mean that people seek tests that guarantee a positive answer. In studies where subjects could select either such pseudo-tests or genuinely diagnostic ones, they favored the genuinely diagnostic.[12][13]
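To make the equivalence concrete, here is a minimal sketch (a toy model, not taken from the cited studies) in which the candidates are assumed to be the numbers 1 through 10: the positive test “Is it an odd number?” and the negative test “Is it an even number?” eliminate exactly the same candidates.

```python
# Toy model (assumptions for illustration): the secret number is 3 and the
# candidate pool is 1..10. A yes/no question partitions the candidate set;
# the "positive" and "negative" phrasings produce the same partition.

candidates = set(range(1, 11))
secret = 3

def remaining(predicate, answer):
    """Candidates still possible once the question has been answered."""
    return {n for n in candidates if predicate(n) == answer}

is_odd = lambda n: n % 2 == 1
is_even = lambda n: n % 2 == 0

after_positive = remaining(is_odd, is_odd(secret))    # "Is it odd?"  -> yes
after_negative = remaining(is_even, is_even(secret))  # "Is it even?" -> no

print(after_positive == after_negative)  # True: both leave {1, 3, 5, 7, 9}
```

Either phrasing halves the search space identically; the preference for the positive version concerns wording, not information.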

The preference for positive tests in itself is not a bias, since positive tests can be highly informative.[14] However, in combination with other effects, this strategy can confirm existing beliefs or assumptions, independently of whether they are true.[6] In real-world situations, evidence is often complex and mixed. For example, various contradictory ideas about someone could each be supported by concentrating on one aspect of his or her behavior.[9] Thus any search for evidence in favor of a hypothesis is likely to succeed.[6] One illustration of this is the way the phrasing of a question can significantly change the answer.[9] For example, people who are asked, “Are you happy with your social life?” report greater satisfaction than those asked, “Are you unhappy with your social life?”[15]

Even a small change in a question’s wording can affect how people search through available information, and hence the conclusions they reach. This was shown using a fictional child custody case.[16] Participants read that Parent A was moderately suitable to be the guardian in multiple ways. Parent B had a mix of salient positive and negative qualities: a close relationship with the child but a job that would take them away for long periods of time. When asked, “Which parent should have custody of the child?” the majority of participants chose Parent B, looking mainly for positive attributes. However, when asked, “Which parent should be denied custody of the child?” they looked for negative attributes and the majority answered that Parent B should be denied custody, implying that Parent A should have custody.[16]

Similar studies have demonstrated how people engage in a biased search for information, but also that this phenomenon may be limited by a preference for genuine diagnostic tests. In an initial experiment, participants rated another person on the introversion–extroversion personality dimension on the basis of an interview. They chose the interview questions from a given list. When the interviewee was introduced as an introvert, the participants chose questions that presumed introversion, such as, “What do you find unpleasant about noisy parties?” When the interviewee was described as extroverted, almost all the questions presumed extroversion, such as, “What would you do to liven up a dull party?” These loaded questions gave the interviewees little or no opportunity to falsify the hypothesis about them.[17] A later version of the experiment gave the participants less presumptive questions to choose from, such as, “Do you shy away from social interactions?”[18] Participants preferred to ask these more diagnostic questions, showing only a weak bias towards positive tests. This pattern, of a main preference for diagnostic tests and a weaker preference for positive tests, has been replicated in other studies.[18]

Personality traits influence and interact with biased search processes.[19] Individuals vary in their ability to defend their attitudes from external attacks in relation to selective exposure. Selective exposure occurs when individuals search for information that is consistent, rather than inconsistent, with their personal beliefs.[20] An experiment examined the extent to which individuals could refute arguments that contradicted their personal beliefs.[19] People with high confidence levels readily seek out information that contradicts their personal position in order to form an argument, whereas people with low confidence levels do not seek out contradictory information and prefer information that supports their position. People generate and evaluate evidence in arguments that are biased towards their own beliefs and opinions.[21] Heightened confidence levels thus decrease preference for information that supports individuals’ personal beliefs.

Another experiment gave participants a complex rule-discovery task that involved moving objects simulated by a computer.[22] Objects on the computer screen followed specific laws, which the participants had to figure out; they could “fire” objects across the screen to test their hypotheses. Despite making many attempts over a ten-hour session, none of the participants figured out the rules of the system. They typically attempted to confirm rather than falsify their hypotheses, and were reluctant to consider alternatives. Even after seeing objective evidence that refuted their working hypotheses, they frequently continued doing the same tests. Some of the participants were taught proper hypothesis-testing, but these instructions had almost no effect.[22]

Biased interpretation of information

Smart people believe weird things because they are skilled at defending beliefs they arrived at for non-smart reasons.

Michael Shermer[23]

Confirmation biases are not limited to the collection of evidence. Even if two individuals have the same information, the way they interpret it can be biased.

A team at Stanford University conducted an experiment involving participants who felt strongly about capital punishment, with half in favor and half against it.[24][25] Each participant read descriptions of two studies: a comparison of U.S. states with and without the death penalty, and a comparison of murder rates in a state before and after the introduction of the death penalty. After reading a quick description of each study, the participants were asked whether their opinions had changed. Then, they read a more detailed account of each study’s procedure and had to rate whether the research was well-conducted and convincing.[24] In fact, the studies were fictional. Half the participants were told that one kind of study supported the deterrent effect and the other undermined it, while for other participants the conclusions were swapped.[24][25]

The participants, whether supporters or opponents, reported shifting their attitudes slightly in the direction of the first study they read. Once they read the more detailed descriptions of the two studies, they almost all returned to their original belief regardless of the evidence provided, pointing to details that supported their viewpoint and disregarding anything contrary. Participants described studies supporting their pre-existing view as superior to those that contradicted it, in detailed and specific ways.[24][26] Writing about a study that seemed to undermine the deterrence effect, a death penalty proponent wrote, “The research didn’t cover a long enough period of time,” while an opponent’s comment on the same study said, “No strong evidence to contradict the researchers has been presented.”[24] The results illustrated that people set higher standards of evidence for hypotheses that go against their current expectations. This effect, known as “disconfirmation bias”, has been supported by other experiments.[27]

Another study of biased interpretation occurred during the 2004 U.S. presidential election and involved participants who reported having strong feelings about the candidates. They were shown apparently contradictory pairs of statements, either from Republican candidate George W. Bush, Democratic candidate John Kerry or a politically neutral public figure. They were also given further statements that made the apparent contradiction seem reasonable. From these three pieces of information, they had to decide whether or not each individual’s statements were inconsistent.[28]:1948 There were strong differences in these evaluations, with participants much more likely to interpret statements from the candidate they opposed as contradictory.[28]:1951

An MRI scanner allowed researchers to examine how the human brain deals with dissonant information

In this experiment, the participants made their judgments while in a magnetic resonance imaging (MRI) scanner which monitored their brain activity. As participants evaluated contradictory statements by their favored candidate, emotional centers of their brains were aroused. This did not happen with the statements by the other figures. The experimenters inferred that the different responses to the statements were not due to passive reasoning errors. Instead, the participants were actively reducing the cognitive dissonance induced by reading about their favored candidate’s irrational or hypocritical behavior.[28]:1956

Biases in belief interpretation are persistent, regardless of intelligence level. Participants in an experiment took the SAT test (a college admissions test used in the United States) to assess their intelligence levels. They then read information regarding safety concerns for vehicles, and the experimenters manipulated the national origin of the car. American participants gave their opinion on whether the car should be banned on a six-point scale, where one indicated “definitely yes” and six indicated “definitely no”. Participants first evaluated whether they would allow a dangerous German car on American streets and a dangerous American car on German streets. Participants believed that the dangerous German car on American streets should be banned more quickly than the dangerous American car on German streets. There was no difference across intelligence levels in the rate at which participants would ban a car.[21]

Biased interpretation is not restricted to emotionally significant topics. In another experiment, participants were told a story about a theft. They had to rate the evidential importance of statements arguing either for or against a particular character being responsible. When they hypothesized that character’s guilt, they rated statements supporting that hypothesis as more important than conflicting statements.[29]

Biased memory recall of information

People may remember evidence selectively to reinforce their expectations, even if they gather and interpret evidence in a neutral manner. This effect is called “selective recall”, “confirmatory memory”, or “access-biased memory”.[30] Psychological theories differ in their predictions about selective recall. Schema theory predicts that information matching prior expectations will be more easily stored and recalled than information that does not match.[31] Some alternative approaches say that surprising information stands out and so is memorable.[31] Predictions from both these theories have been confirmed in different experimental contexts, with no theory winning outright.[32]

In one study, participants read a profile of a woman which described a mix of introverted and extroverted behaviors.[33] They later had to recall examples of her introversion and extroversion. One group was told this was to assess the woman for a job as a librarian, while a second group was told it was for a job in real estate sales. There was a significant difference between what these two groups recalled, with the “librarian” group recalling more examples of introversion and the “sales” group recalling more extroverted behavior.[33] A selective memory effect has also been shown in experiments that manipulate the desirability of personality types.[31][34] In one of these, a group of participants were shown evidence that extroverted people are more successful than introverts. Another group were told the opposite. In a subsequent, apparently unrelated study, participants were asked to recall events from their lives in which they had been either introverted or extroverted. Each group of participants provided more memories connecting themselves with the more desirable personality type, and recalled those memories more quickly.[35]

Changes in emotional states can also influence memory recall.[36][37] Participants rated how they felt when they had first learned that O.J. Simpson had been acquitted of murder charges.[36] They described their emotional reactions and confidence regarding the verdict one week, two months, and one year after the trial. Results indicated that participants’ assessments of Simpson’s guilt changed over time. The more participants’ opinion of the verdict had changed, the less stable were their memories of their initial emotional reactions. When participants recalled their initial emotional reactions two months and a year later, past appraisals closely resembled current appraisals of emotion. People demonstrate sizable myside bias when discussing their opinions on controversial topics.[21] Memory recall and construction of experiences undergo revision in relation to corresponding emotional states.

Myside bias has been shown to influence the accuracy of memory recall.[37] In an experiment, widows and widowers rated the intensity of their experienced grief six months and five years after the deaths of their spouses. Participants noted a higher experience of grief at six months than at five years. Yet, when the participants were asked after five years how they had felt six months after the death of their significant other, the intensity of grief participants recalled was highly correlated with their current level of grief. Individuals appear to use their current emotional states to analyze how they must have felt when experiencing past events.[36] Emotional memories are reconstructed by current emotional states.

One study showed how selective memory can maintain belief in extrasensory perception (ESP).[38] Believers and disbelievers were each shown descriptions of ESP experiments. Half of each group were told that the experimental results supported the existence of ESP, while the others were told they did not. In a subsequent test, participants recalled the material accurately, apart from believers who had read the non-supportive evidence. This group remembered significantly less information and some of them incorrectly remembered the results as supporting ESP.[38]

Individual differences

Myside bias was once believed to be correlated with intelligence; however, studies have shown that myside bias is more influenced by the ability to think rationally than by level of intelligence.[21] Myside bias can cause an inability to effectively and logically evaluate the opposite side of an argument. Studies have characterized myside bias as an absence of “active open-mindedness”, meaning the active search for why an initial idea may be wrong.[39] Typically, myside bias is operationalized in empirical studies as the quantity of evidence used in support of one’s own side in comparison to the opposite side.[40]

One study found individual differences in myside bias, investigating differences that are acquired through learning in a cultural context and are mutable. The researcher found important individual differences in argumentation. Studies have suggested that individual differences such as deductive reasoning ability, ability to overcome belief bias, epistemological understanding, and thinking disposition are significant predictors of reasoning and of generating arguments, counterarguments, and rebuttals.[41][42][43]

A study by Christopher Wolfe and Anne Britt also investigated how participants’ views of “what makes a good argument?” can be a source of myside bias that influences the way a person formulates their own arguments.[40] The study investigated individual differences of argumentation schema and asked participants to write essays. The participants were randomly assigned to write essays either for or against their preferred side of an argument and were given research instructions that took either a balanced or an unrestricted approach. The balanced-research instructions directed participants to create a “balanced” argument, i.e., that included both pros and cons; the unrestricted-research instructions included nothing on how to create the argument.[40]

Overall, the results revealed that the balanced-research instructions significantly increased the incidence of opposing information in arguments. The data also revealed that personal belief is not in itself a source of myside bias, but that participants who believe that a good argument is one based on facts are more likely to exhibit myside bias than other participants. This evidence is consistent with the claims proposed in Baron’s article: that people’s opinions about what makes good thinking can influence how arguments are generated.[40]

Discovery

Informal observations

Francis Bacon

Before psychological research on confirmation bias, the phenomenon had been observed throughout history. The Greek historian Thucydides (c. 460 BC – c. 395 BC) wrote of misguided reason in The Peloponnesian War: “… for it is a habit of mankind to entrust to careless hope what they long for, and to use sovereign reason to thrust aside what they do not fancy”.[44] The Italian poet Dante Alighieri (1265–1321) noted it in the Divine Comedy, in which St. Thomas Aquinas cautions Dante upon meeting in Paradise: “opinion—hasty—often can incline to the wrong side, and then affection for one’s own opinion binds, confines the mind”.[45] Ibn Khaldun noticed the same effect in his Muqaddimah:[46]

Untruth naturally afflicts historical information. There are various reasons that make this unavoidable. One of them is partisanship for opinions and schools. […] if the soul is infected with partisanship for a particular opinion or sect, it accepts without a moment’s hesitation the information that is agreeable to it. Prejudice and partisanship obscure the critical faculty and preclude critical investigation. The result is that falsehoods are accepted and transmitted.

In the Novum Organum, English philosopher and scientist Francis Bacon (1561–1626)[47] noted that biased assessment of evidence drove “all superstitions, whether in astrology, dreams, omens, divine judgments or the like”.[48] He wrote:[48]

The human understanding when it has once adopted an opinion … draws all things else to support and agree with it. And though there be a greater number and weight of instances to be found on the other side, yet these it either neglects or despises, or else by some distinction sets aside or rejects[.]

In the second volume of The World as Will and Representation (1844), German philosopher Arthur Schopenhauer observed that “An adopted hypothesis gives us lynx-eyes for everything that confirms it and makes us blind to everything that contradicts it.”[49] In his 1897 essay “What Is Art?”, Russian novelist Leo Tolstoy wrote:[50]

I know that most men—not only those considered clever, but even those who are very clever, and capable of understanding most difficult scientific, mathematical, or philosophic problems—can very seldom discern even the simplest and most obvious truth if it be such as to oblige them to admit the falsity of conclusions they have formed, perhaps with much difficulty—conclusions of which they are proud, which they have taught to others, and on which they have built their lives.

Hypothesis-testing (falsification) explanation (Wason)

In Peter Wason’s initial experiment published in 1960 (which does not mention the term “confirmation bias”), he repeatedly challenged participants to identify a rule applying to triples of numbers. They were told that (2,4,6) fits the rule. They generated triples, and the experimenter told them whether or not each triple conformed to the rule.[1]:179

The actual rule was simply “any ascending sequence”, but participants had great difficulty in finding it, often announcing rules that were far more specific, such as “the middle number is the average of the first and last”.[51] The participants seemed to test only positive examples—triples that obeyed their hypothesized rule. For example, if they thought the rule was, “Each number is two greater than its predecessor,” they would offer a triple that fitted (confirmed) this rule, such as (11,13,15) rather than a triple that violated (falsified) it, such as (11,12,19).[52]
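The logic of the task can be sketched in code. In this hypothetical simulation (the rule definitions are assumptions for illustration, not Wason’s materials), triples generated to confirm the narrow “+2” hypothesis always fit the broader true rule, so positive testing can never expose the error:

```python
import random

# True rule (broad): any strictly ascending triple.
true_rule = lambda t: t[0] < t[1] < t[2]

# Participant's hypothesis (narrow): each number is two more than the last.
hypothesis = lambda t: t[1] == t[0] + 2 and t[2] == t[1] + 2

# Positive tests: triples chosen because they fit the hypothesis.
# Every such triple also fits the true rule, so feedback is always "yes"
# and the hypothesis is never refuted.
for _ in range(1000):
    start = random.randint(1, 100)
    triple = (start, start + 2, start + 4)  # e.g. (11, 13, 15)
    assert hypothesis(triple) and true_rule(triple)

# Only a triple that violates the hypothesis can reveal the mismatch:
print(true_rule((11, 12, 19)))  # True -> "fits", falsifying the "+2" rule
```

Because the hypothesized rule here is a strict subset of the true rule, confirming instances are guaranteed and only tests that violate the hypothesis are diagnostic.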

Wason interpreted his results as showing a preference for confirmation over falsification, hence he coined the term “confirmation bias”.[Note 3][53] Wason also used confirmation bias to explain the results of his selection task experiment.[54] Participants repeatedly performed badly on various forms of this test, in most cases ignoring information that could potentially refute (falsify) the specified rule.[55][56]

Hypothesis testing (positive test strategy) explanation (Klayman and Ha)

Klayman and Ha’s 1987 paper argues that the Wason experiments do not actually demonstrate a bias towards confirmation, but instead a tendency to make tests consistent with the working hypothesis.[14][57] They called this the “positive test strategy”.[9] This strategy is an example of a heuristic: a reasoning shortcut that is imperfect but easy to compute.[58] Klayman and Ha used Bayesian probability and information theory as their standard of hypothesis-testing, rather than the falsificationism used by Wason. According to these ideas, each answer to a question yields a different amount of information, which depends on the person’s prior beliefs. Thus a scientific test of a hypothesis is one that is expected to produce the most information. Since the information content depends on initial probabilities, a positive test can either be highly informative or uninformative. Klayman and Ha argued that when people think about realistic problems, they are looking for a specific answer with a small initial probability. In this case, positive tests are usually more informative than negative tests.[14] However, in Wason’s rule discovery task the answer—three numbers in ascending order—is very broad, so positive tests are unlikely to yield informative answers. Klayman and Ha supported their analysis by citing an experiment that used the labels “DAX” and “MED” in place of “fits the rule” and “doesn’t fit the rule”. This avoided implying that the aim was to find a low-probability rule. Participants had much more success with this version of the experiment.[59][60]
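The information-theoretic point can be illustrated with a short calculation (my framing of the idea, not code from the paper): the expected information from a yes/no test is the entropy of its answer, which depends on the prior probability of a “yes”.

```python
from math import log2

def answer_entropy(p_yes):
    """Expected information (bits) from a yes/no test whose answer is
    'yes' with prior probability p_yes."""
    if p_yes in (0.0, 1.0):
        return 0.0
    return -p_yes * log2(p_yes) - (1 - p_yes) * log2(1 - p_yes)

# Broad rule (as in Wason's task): a positive test almost surely "fits",
# so the answer is nearly certain and carries little information.
print(round(answer_entropy(0.95), 2))  # 0.29 bits

# Rare, specific target (Klayman and Ha's realistic case): the same
# positive test is now considerably more informative.
print(round(answer_entropy(0.10), 2))  # 0.47 bits

# A test whose outcome is maximally uncertain is the most informative.
print(round(answer_entropy(0.50), 2))  # 1.0 bit
```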

If the true rule (T) encompasses the current hypothesis (H), then positive tests (examining an H to see if it is T) will not show that the hypothesis is false.

If the true rule (T) overlaps the current hypothesis (H), then either a negative test or a positive test can potentially falsify H.

When the working hypothesis (H) includes the true rule (T), then positive tests are the only way to falsify H.

In light of this and other critiques, the focus of research moved away from confirmation versus falsification of an hypothesis, to examining whether people test hypotheses in an informative way, or an uninformative but positive way. The search for “true” confirmation bias led psychologists to look at a wider range of effects in how people process information.[61]

Information processing explanations

There are currently three main information processing explanations of confirmation bias, plus a recent addition.

Cognitive versus motivational

Happy events are more likely to be remembered.

According to Robert MacCoun, most biased evidence processing occurs through a combination of “cold” (cognitive) and “hot” (motivated) mechanisms.[62]

Cognitive explanations for confirmation bias are based on limitations in people’s ability to handle complex tasks, and the shortcuts, called heuristics, that they use.[63] For example, people may judge the reliability of evidence by using the availability heuristic, that is, how readily a particular idea comes to mind.[64] It is also possible that people can only focus on one thought at a time, so find it difficult to test alternative hypotheses in parallel.[1]:198–99 Another heuristic is the positive test strategy identified by Klayman and Ha, in which people test a hypothesis by examining cases where they expect a property or event to occur. This heuristic avoids the difficult or impossible task of working out how diagnostic each possible question will be. However, it is not universally reliable, so people can overlook challenges to their existing beliefs.[14][1]:200

Motivational explanations involve an effect of desire on belief.[1]:197[65] It is known that people prefer positive thoughts over negative ones in a number of ways: this is called the “Pollyanna principle”.[66] Applied to arguments or sources of evidence, this could explain why desired conclusions are more likely to be believed true. According to experiments that manipulate the desirability of the conclusion, people demand a high standard of evidence for unpalatable ideas and a low standard for preferred ideas. In other words, they ask, “Can I believe this?” for some suggestions and, “Must I believe this?” for others.[67][68] Although consistency is a desirable feature of attitudes, an excessive drive for consistency is another potential source of bias because it may prevent people from neutrally evaluating new, surprising information. Social psychologist Ziva Kunda combines the cognitive and motivational theories, arguing that motivation creates the bias, but cognitive factors determine the size of the effect.[1]:198

Cost-benefit

Explanations in terms of cost-benefit analysis assume that people do not just test hypotheses in a disinterested way, but assess the costs of different errors.[69] Using ideas from evolutionary psychology, James Friedrich suggests that people do not primarily aim at truth in testing hypotheses, but try to avoid the most costly errors. For example, employers might ask one-sided questions in job interviews because they are focused on weeding out unsuitable candidates.[70] Yaacov Trope and Akiva Liberman’s refinement of this theory assumes that people compare the two different kinds of error: accepting a false hypothesis or rejecting a true hypothesis. For instance, someone who underestimates a friend’s honesty might treat him or her suspiciously and so undermine the friendship. Overestimating the friend’s honesty may also be costly, but less so. In this case, it would be rational to seek, evaluate or remember evidence of their honesty in a biased way.[71]

When someone gives an initial impression of being introverted or extroverted, questions that match that impression come across as more empathic.[72] This suggests that when talking to someone who seems to be an introvert, it is a sign of better social skills to ask, “Do you feel awkward in social situations?” rather than, “Do you like noisy parties?” The connection between confirmation bias and social skills was corroborated by a study of how college students get to know other people. Highly self-monitoring students, who are more sensitive to their environment and to social norms, asked more matching questions when interviewing a high-status staff member than when getting to know fellow students.[72]
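Returning to the friendship example above, a hypothetical worked comparison (the numbers are invented purely for illustration) shows why asymmetric error costs can make one-sided evidence-gathering look rational in Trope and Liberman’s framework:

```python
# Invented numbers: prior belief that the friend is dishonest, and the
# costs of the two possible errors.
p_dishonest = 0.2
cost_false_trust = 10     # harm from trusting a friend who is in fact dishonest
cost_false_distrust = 50  # harm from suspicion that wrecks a genuine friendship

expected_cost_trust = p_dishonest * cost_false_trust              # 2.0
expected_cost_distrust = (1 - p_dishonest) * cost_false_distrust  # 40.0

# Rejecting a true "my friend is honest" hypothesis is far costlier here,
# so it pays to keep believing it -- and to gather evidence accordingly.
print(expected_cost_trust < expected_cost_distrust)  # True
```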

Exploratory versus confirmatory

Psychologists Jennifer Lerner and Philip Tetlock distinguish two different kinds of thinking process. Exploratory thought neutrally considers multiple points of view and tries to anticipate all possible objections to a particular position, while confirmatory thought seeks to justify a specific point of view. Lerner and Tetlock say that when people expect to justify their position to others whose views they already know, they will tend to adopt a similar position to those people, and then use confirmatory thought to bolster their own credibility. However, if the external parties are overly aggressive or critical, people will disengage from thought altogether, and simply assert their personal opinions without justification. Lerner and Tetlock say that people only push themselves to think critically and logically when they know in advance they will need to explain themselves to others who are well-informed, genuinely interested in the truth, and whose views they don’t already know. Because those conditions rarely exist, they argue, most people are using confirmatory thought most of the time.[73][74][75]

Make-believe

Developmental psychologist Eve Whitmore has argued that beliefs and biases involved in confirmation bias have their roots in childhood coping through make-believe, which becomes “the basis for more complex forms of self-deception and illusion into adulthood.” The friction brought on by questioning as an adolescent with developing critical thinking can lead to the rationalization of false beliefs, and the habit of such rationalization can become unconscious over the years.[76]

Real-world effects

Social media

In social media, confirmation bias is amplified by the use of filter bubbles, or “algorithmic editing”, which displays to individuals only information they are likely to agree with, while excluding opposing views.[77] Some have argued that confirmation bias is the reason why society can never escape from filter bubbles, because individuals are psychologically hardwired to seek information that agrees with their preexisting values and beliefs.[78] Others have further argued that the mixture of the two is degrading democracy—claiming that this “algorithmic editing” removes diverse viewpoints and information—and that unless filter bubble algorithms are removed, voters will be unable to make fully informed political decisions.[79][77]

The rise of social media has contributed greatly to the rapid spread of fake news, that is, false and misleading information that is presented as credible news from a seemingly reliable source. Confirmation bias (selecting or reinterpreting evidence to support one’s beliefs) is one of three main hurdles cited as to why critical thinking goes astray in these circumstances. The other two are shortcut heuristics (when overwhelmed or short of time, people rely on simple rules such as group consensus or trusting an expert or role model) and social goals (social motivation or peer pressure can interfere with objective analysis of facts at hand).[80]

In combating the spread of fake news, social media sites have considered turning toward “digital nudging”.[81] This can take two forms: nudging of information and nudging of presentation. Nudging of information entails social media sites providing a disclaimer or label questioning or warning users about the validity of a source, while nudging of presentation entails exposing users to new information which they may not have sought out but which could introduce them to viewpoints that may counter their own confirmation biases.[82]

Science and scientific research

A distinguishing feature of scientific thinking is the search for confirming or supportive evidence (inductive reasoning) as well as falsifying evidence (deductive reasoning). Inductive research in particular can have a serious problem with confirmation bias.[83][84]

Many times in the history of science, scientists have resisted new discoveries by selectively interpreting or ignoring unfavorable data.[1]:192–94 The assessment of the quality of scientific studies seems to be particularly vulnerable to confirmation bias. Several studies have shown that scientists rate studies that report findings consistent with their prior beliefs more favorably than studies reporting findings inconsistent with their previous beliefs.[7][85][86]

However, assuming that the research question is relevant, the experimental design adequate and the data clearly and comprehensively described, the empirical data obtained should be important to the scientific community and should not be viewed prejudicially, regardless of whether they conform to current theoretical predictions.[86] In practice, researchers may misunderstand, misinterpret, or simply not read studies that contradict their preconceptions, or may wrongly cite them as if they actually supported their claims.[87]

Further, confirmation biases can sustain scientific theories or research programs in the face of inadequate or even contradictory evidence.[55][88] The discipline of parapsychology is often cited as an example in the context of whether it is a protoscience or a pseudoscience.[89]

An experimenter’s confirmation bias can potentially affect which data are reported. Data that conflict with the experimenter’s expectations may be more readily discarded as unreliable, producing the so-called file drawer effect. To combat this tendency, scientific training teaches ways to prevent bias.[90] For example, experimental design of randomized controlled trials (coupled with their systematic review) aims to minimize sources of bias.[90][91]

The social process of peer review aims to mitigate the effect of individual scientists’ biases, even though the peer review process itself may be susceptible to such biases.[92][93][86][94][95] Confirmation bias may thus be especially harmful to objective evaluations regarding nonconforming results, since biased individuals may regard opposing evidence as weak in principle and give little serious thought to revising their beliefs.[85] Scientific innovators often meet with resistance from the scientific community, and research presenting controversial results frequently receives harsh peer review.[96]

Media and fact-checking

Mainstream media has in recent years come under severe economic threat from online startups. In addition, the rapid spread of misinformation and conspiracy theories via social media is slowly creeping into mainstream media. One solution is for some media staff to be assigned a fact-checking role. Independent fact-checking organisations have also become prominent. However, the fact-checking of media reports and investigations is subject to the same confirmation bias as the peer review of scientific research. This bias has been little studied so far. For example, a fact-checker with progressive political views might be more critical than necessary of a factual report from a conservative commentator. Another example is that facts are often explained with ambiguous words, so that progressives and conservatives may interpret the words differently according to their own beliefs.[97]

Finance

Confirmation bias can lead investors to be overconfident, ignoring evidence that their strategies will lose money.[8][98] In studies of political stock markets, investors made more profit when they resisted bias. For example, participants who interpreted a candidate’s debate performance in a neutral rather than partisan way were more likely to profit.[99] To combat the effect of confirmation bias, investors can try to adopt a contrary viewpoint “for the sake of argument”.[100] In one technique, they imagine that their investments have collapsed and ask themselves why this might happen.[8]

Medicine and health

Cognitive biases are important variables in clinical decision-making by medical general practitioners (GPs) and medical specialists. Two important ones are confirmation bias and the overlapping availability bias. A GP may make a diagnosis early on during an examination, and then seek confirming evidence rather than falsifying evidence. This cognitive error is partly caused by the availability of evidence about the supposed disorder being diagnosed. For example, the client may have mentioned the disorder, or the GP may have recently read a much-discussed paper about the disorder. The basis of this cognitive shortcut or heuristic (termed anchoring) is that the doctor does not consider multiple possibilities based on evidence, but prematurely latches on (or anchors to) a single cause.[101] In emergency medicine, because of time pressure, there is a high density of decision-making, and shortcuts are frequently applied. The potential failure rate of these cognitive decisions needs to be managed by education about the 30 or more cognitive biases that can occur, so as to set in place proper debiasing strategies.[102]

Raymond Nickerson, a psychologist, blames confirmation bias for the ineffective medical procedures that were used for centuries before the arrival of scientific medicine.[1]:192 If a patient recovered, medical authorities counted the treatment as successful, rather than looking for alternative explanations such as that the disease had run its natural course. Biased assimilation is a factor in the modern appeal of alternative medicine, whose proponents are swayed by positive anecdotal evidence but treat scientific evidence hyper-critically.[103][104][105] Confirmation bias may also cause doctors to perform unnecessary medical procedures due to pressure from adamant patients.[106]

Cognitive therapy was developed by Aaron T. Beck in the early 1960s and has become a popular approach.[107] According to Beck, biased information processing is a factor in depression.[108] His approach teaches people to treat evidence impartially, rather than selectively reinforcing negative outlooks.[47] Phobias and hypochondria have also been shown to involve confirmation bias for threatening information.[109]

Politics, law and policing

Mock trials allow researchers to examine confirmation biases in a realistic setting

Nickerson argues that reasoning in judicial and political contexts is sometimes subconsciously biased, favoring conclusions that judges, juries or governments have already committed to.[1]:191–93 Since the evidence in a jury trial can be complex, and jurors often reach decisions about the verdict early on, it is reasonable to expect an attitude polarization effect. The prediction that jurors will become more extreme in their views as they see more evidence has been borne out in experiments with mock trials.[110][111] Both inquisitorial and adversarial criminal justice systems are affected by confirmation bias.[112]

Confirmation bias can be a factor in creating or extending conflicts, from emotionally charged debates to wars: by interpreting the evidence in their favor, each opposing party can become overconfident that it is in the stronger position.[113] On the other hand, confirmation bias can result in people ignoring or misinterpreting the signs of an imminent or incipient conflict. For example, psychologists Stuart Sutherland and Thomas Kida have each argued that U.S. Navy Admiral Husband E. Kimmel showed confirmation bias when playing down the first signs of the Japanese attack on Pearl Harbor.[55][114]

A two-decade study of political pundits by Philip E. Tetlock found that, on the whole, their predictions were not much better than chance. Tetlock divided experts into “foxes” who maintained multiple hypotheses, and “hedgehogs” who were more dogmatic. In general, the hedgehogs were much less accurate. Tetlock blamed their failure on confirmation bias, and specifically on their inability to make use of new information that contradicted their existing theories.[115]

In police investigations, a detective may identify a suspect early in an investigation and may then largely seek supporting or confirming evidence, ignoring or downplaying falsifying evidence.[116]

Social psychology

Social psychologists have identified two tendencies in the way people seek or interpret information about themselves. Self-verification is the drive to reinforce the existing self-image and self-enhancement is the drive to seek positive feedback. Both are served by confirmation biases.[117] In experiments where people are given feedback that conflicts with their self-image, they are less likely to attend to it or remember it than when given self-verifying feedback.[118][119][120] They reduce the impact of such information by interpreting it as unreliable.[118][121][122] Similar experiments have found a preference for positive feedback, and the people who give it, over negative feedback.[117]

Mass delusions

Confirmation bias can play a key role in the propagation of mass delusions. Witch trials are frequently cited as an example.[123][124]

Another example is the Seattle windshield pitting epidemic, in which windshields were believed to be suffering damage from an unknown cause. As news of the apparent wave of damage spread, more and more people checked their windshields, discovered that their windshields too had been damaged, and thus confirmed belief in the supposed epidemic. In fact, the windshields had been damaged all along, but the damage went unnoticed until people checked their windshields as the delusion spread.[125]

Paranormal beliefs

One factor in the appeal of alleged psychic readings is that listeners apply a confirmation bias which fits the psychic’s statements to their own lives.[126] By making a large number of ambiguous statements in each sitting, the psychic gives the client more opportunities to find a match. This is one of the techniques of cold reading, with which a psychic can deliver a subjectively impressive reading without any prior information about the client.[126] Investigator James Randi compared the transcript of a reading to the client’s report of what the psychic had said, and found that the client showed a strong selective recall of the “hits”.[127]

As a striking illustration of confirmation bias in the real world, Nickerson mentions numerological pyramidology: the practice of finding meaning in the proportions of the Egyptian pyramids.[1]:190 There are many different length measurements that can be made of, for example, the Great Pyramid of Giza and many ways to combine or manipulate them. Hence it is almost inevitable that people who look at these numbers selectively will find superficially impressive correspondences, for example with the dimensions of the Earth.[1]:190

Recruitment and selection

Unconscious cognitive bias (including confirmation bias) in job recruitment affects hiring decisions and can potentially undermine a diverse and inclusive workplace. A variety of unconscious biases affect recruitment decisions, but confirmation bias is one of the major ones, especially during the interview stage.[128] The interviewer will often select a candidate who confirms their own beliefs, even though other candidates are equally or better qualified.

Associated effects and outcomes

Polarization of opinion

Main article: Attitude polarization

When people with opposing views interpret new information in a biased way, their views can move even further apart. This is called “attitude polarization”.[129] The effect was demonstrated by an experiment that involved drawing a series of red and black balls from one of two concealed “bingo baskets”. Participants knew that one basket contained 60 percent black and 40 percent red balls; the other, 40 percent black and 60 percent red. The experimenters looked at what happened when balls of alternating color were drawn in turn, a sequence that does not favor either basket. After each ball was drawn, participants in one group were asked to state out loud their judgments of the probability that the balls were being drawn from one or the other basket. These participants tended to grow more confident with each successive draw—whether they initially thought the basket with 60 percent black balls or the one with 60 percent red balls was the more likely source, their estimate of the probability increased. Another group of participants were asked to state probability estimates only at the end of a sequence of drawn balls, rather than after each ball. They did not show the polarization effect, suggesting that it does not necessarily occur when people simply hold opposing positions, but rather when they openly commit to them.[130]
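A short Bayesian check (my sketch; the original experiment reports participants’ judgments, not this computation) confirms that alternating draws give no rational grounds for growing confidence:

```python
# Basket A holds 60% black balls; basket B holds 40% (per the experiment).
p_black = {"A": 0.6, "B": 0.4}

posterior_A = 0.5  # equal prior probability for each basket
for i in range(30):
    color = "black" if i % 2 == 0 else "red"  # balls alternate in color
    like_A = p_black["A"] if color == "black" else 1 - p_black["A"]
    like_B = p_black["B"] if color == "black" else 1 - p_black["B"]
    posterior_A = (posterior_A * like_A) / (
        posterior_A * like_A + (1 - posterior_A) * like_B)

# Each black/red pair cancels exactly, so after any even number of draws
# the rational probability is still 0.5 -- increasing confidence is bias.
print(round(posterior_A, 6))  # 0.5
```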

A less abstract study was the Stanford biased interpretation experiment, in which participants with strong opinions about the death penalty read about mixed experimental evidence. Twenty-three percent of the participants reported that their views had become more extreme, and this self-reported shift correlated strongly with their initial attitudes.[24] In later experiments, participants also reported their opinions becoming more extreme in response to ambiguous information. However, comparisons of their attitudes before and after the new evidence showed no significant change, suggesting that the self-reported changes might not be real.[27][129][131] Based on these experiments, Deanna Kuhn and Joseph Lao concluded that polarization is a real phenomenon but far from inevitable, only happening in a small minority of cases, and it was prompted not only by considering mixed evidence, but by merely thinking about the topic.[129]

Charles Taber and Milton Lodge argued that the Stanford team’s result had been hard to replicate because the arguments used in later experiments were too abstract or confusing to evoke an emotional response. The Taber and Lodge study used the emotionally charged topics of gun control and affirmative action.[27] They measured the attitudes of their participants towards these issues before and after reading arguments on each side of the debate. Two groups of participants showed attitude polarization: those with strong prior opinions and those who were politically knowledgeable. In part of this study, participants chose which information sources to read, from a list prepared by the experimenters. For example, they could read the National Rifle Association’s and the Brady Anti-Handgun Coalition’s arguments on gun control. Even when instructed to be even-handed, participants were more likely to read arguments that supported their existing attitudes than arguments that did not. This biased search for information correlated well with the polarization effect.[27]

The backfire effect is a name for the finding that, given evidence against their beliefs, people can reject the evidence and believe even more strongly.[132][133] The phrase was coined by Brendan Nyhan and Jason Reifler in 2010.[134] However, subsequent research has since failed to replicate findings supporting the backfire effect.[135] One study conducted at Ohio State University and George Washington University studied 10,100 participants across 52 different issues expected to trigger a backfire effect. While the findings did conclude that individuals are reluctant to embrace facts that contradict their already-held ideology, no cases of backfire were detected.[136] The backfire effect has since been noted to be a rare phenomenon rather than a common occurrence[137] (compare the boomerang effect).

Persistence of discredited beliefs

Main article: Belief perseverance

[B]eliefs can survive potent logical or empirical challenges. They can survive and even be bolstered by evidence that most uncommitted observers would agree logically demands some weakening of such beliefs. They can even survive the total destruction of their original evidential bases.

—Lee Ross and Craig Anderson[138]

Confirmation biases provide one plausible explanation for the persistence of beliefs when the initial evidence for them is removed or when they have been sharply contradicted.[1]:187 This belief perseverance effect was first demonstrated experimentally by Festinger, Riecken, and Schachter. These psychologists spent time with a cult whose members were convinced that the world would end on December 21, 1954. After the prediction failed, most believers still clung to their faith. Their book describing this research is aptly named When Prophecy Fails.[139]

The term “belief perseverance,” however, was coined in a series of experiments using what is called the “debriefing paradigm”: participants read fake evidence for a hypothesis, their attitude change is measured, then the fakery is exposed in detail. Their attitudes are then measured once more to see if their belief returns to its previous level.[138]

A common finding is that at least some of the initial belief remains even after a full debriefing.[140] In one experiment, participants had to distinguish between real and fake suicide notes. The feedback was random: some were told they had done well while others were told they had performed badly. Even after being fully debriefed, participants were still influenced by the feedback. They still thought they were better or worse than average at that kind of task, depending on what they had initially been told.[141]

In another study, participants read job performance ratings of two firefighters, along with their responses to a risk aversion test.[138] This fictional data was arranged to show either a negative or positive association: some participants were told that a risk-taking firefighter did better, while others were told they did less well than a risk-averse colleague.[142] Even if these two case studies were true, they would have been scientifically poor evidence for a conclusion about firefighters in general. However, the participants found them subjectively persuasive.[142] When the case studies were shown to be fictional, participants’ belief in a link diminished, but around half of the original effect remained.[138] Follow-up interviews established that the participants had understood the debriefing and taken it seriously. Participants seemed to trust the debriefing, but regarded the discredited information as irrelevant to their personal belief.[142]

The continued influence effect is the tendency to believe previously learned misinformation even after it has been corrected. Misinformation can still influence inferences one generates after a correction has occurred.[143]

Preference for early information

Experiments have shown that information is weighted more strongly when it appears early in a series, even when the order is unimportant. For example, people form a more positive impression of someone described as “intelligent, industrious, impulsive, critical, stubborn, envious” than when they are given the same words in reverse order.[144] This irrational primacy effect is independent of the primacy effect in memory in which the earlier items in a series leave a stronger memory trace.[144] Biased interpretation offers an explanation for this effect: seeing the initial evidence, people form a working hypothesis that affects how they interpret the rest of the information.[1]:187

One demonstration of irrational primacy used colored chips supposedly drawn from two urns. Participants were told the color distributions of the urns, and had to estimate the probability of a chip being drawn from one of them.[144] In fact, the colors appeared in a prearranged order. The first thirty draws favored one urn and the next thirty favored the other.[1]:187 The series as a whole was neutral, so rationally, the two urns were equally likely. However, after sixty draws, participants favored the urn suggested by the initial thirty.[144]
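Rationally, only the aggregate evidence matters: the Bayesian posterior is a product of likelihoods, so the order of the draws is irrelevant. The sketch below (with assumed urn proportions, using exact arithmetic via fractions) makes the point:

```python
import random
from fractions import Fraction

# Assumed for illustration: urn 1 yields a "red" chip with probability 3/5,
# urn 2 with probability 2/5; both urns start equally likely.
P1, P2 = Fraction(3, 5), Fraction(2, 5)

def posterior_urn1(draws, prior=Fraction(1, 2)):
    """Exact posterior probability that the chips come from urn 1."""
    like1 = like2 = Fraction(1)
    for chip in draws:
        like1 *= P1 if chip == "red" else 1 - P1
        like2 *= P2 if chip == "red" else 1 - P2
    return prior * like1 / (prior * like1 + (1 - prior) * like2)

draws = ["red"] * 30 + ["blue"] * 30  # first half favors urn 1, second half urn 2
shuffled = draws[:]
random.shuffle(shuffled)

print(posterior_urn1(draws) == posterior_urn1(shuffled))  # True: order cannot matter
print(posterior_urn1(draws))                              # 1/2: the series is neutral
```

Human judges, by contrast, end up favoring whichever urn the first thirty draws suggested.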

Another experiment involved a slide show of a single object, seen as just a blur at first and in slightly better focus with each succeeding slide.[144] After each slide, participants had to state their best guess of what the object was. Participants whose early guesses were wrong persisted with those guesses, even when the picture was sufficiently in focus that the object was readily recognizable to other people.[1]:187

Illusory association between events

Main article: Illusory correlation

Illusory correlation is the tendency to see non-existent correlations in a set of data.[145] This tendency was first demonstrated in a series of experiments in the late 1960s.[146] In one experiment, participants read a set of psychiatric case studies, including responses to the Rorschach inkblot test. The participants reported that the homosexual men in the set were more likely to report seeing buttocks, anuses or sexually ambiguous figures in the inkblots. In fact the fictional case studies had been constructed so that the homosexual men were no more likely to report this imagery or, in one version of the experiment, were less likely to report it than heterosexual men.[145] In a survey, a group of experienced psychoanalysts reported the same set of illusory associations with homosexuality.[145][146]

Another study recorded the symptoms experienced by arthritic patients, along with weather conditions over a 15-month period. Nearly all the patients reported that their pains were correlated with weather conditions, although the real correlation was zero.[147]

Days            Rain   No rain
Arthritis        14       6
No arthritis      7       2

This effect is a kind of biased interpretation, in that objectively neutral or unfavorable evidence is interpreted to support existing beliefs. It is also related to biases in hypothesis-testing behavior.[148] In judging whether two events, such as illness and bad weather, are correlated, people rely heavily on the number of positive-positive cases: in this example, instances of both pain and bad weather. They pay relatively little attention to the other kinds of observation (of no pain and/or good weather).[149] This parallels the reliance on positive tests in hypothesis testing.[148] It may also reflect selective recall, in that people may have a sense that two events are correlated because it is easier to recall times when they happened together.[148]
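The near-zero correlation claim can be made concrete with the standard phi coefficient for a 2×2 table. Here is a minimal sketch in Python, using the illustrative counts in the table above (not the original study’s raw data):

    from math import sqrt

    def phi(a, b, c, d):
        """Phi coefficient for a 2x2 table laid out as:
                  rain   no rain
        pain        a       b
        no pain     c       d
        """
        return (a * d - b * c) / sqrt((a + b) * (c + d) * (a + c) * (b + d))

    # The 14 "pain and rain" days dominate attention, but across all four
    # cells the association is essentially nil (slightly negative, if anything).
    print(round(phi(14, 6, 7, 2), 2))  # -> -0.08

Counting only the pain-and-rain cell makes the link feel strong; the full table shows there is none.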

Posted in Uncategorized | Leave a comment

14-1-01064-1 | State vs Harris, Kirsten Michelle


Case Information

14-1-01064-1 | STATE OF WASHINGTON VS HARRIS, KIRSTEN MICHELLE

Case Number
14-1-01064-1

Court
Thurston

File Date
07/10/2014

Case Type
ADL Criminal Adult

Case Status
Completed/Re-Completed

Party

Plaintiff (Criminal)
STATE OF WASHINGTON, NFN


Defendant (WIP)
HARRIS, KIRSTEN MICHELLE

DOB
3/5/1971

Charges
HARRIS, KIRSTEN MICHELLE

    Description   Statute     Level        Date
  1 NON-CHARGE    00.00.000   Non Charge   07/10/2014

Disposition Events

07/10/2014 Disposition

1 NON-CHARGE   Non-Charge

Events and Hearings

  • 07/10/2014 Preliminary Appearance Calendar 3:30 PM 

Hearing Time
8:00 AM

Comment
PC

  • 07/10/2014 Case Resolution Uncontested Resolution
  • 07/10/2014 Note for Calendar 

Comment
-: NOTE FOR CALENDAR; 07-10-2014PR; PC;

  • 07/10/2014 PreTrial Report 

View Document PRE-TRIAL REPORT

Comment
1: PRE-TRIAL REPORT;

  • 07/10/2014 Affidavit of Probable Cause 

View Document AFFIDAVIT_DECLARATION PROB CAUSE

Comment
2: AFFIDAVIT/DECLARATION PROB CAUSE;

  • 07/10/2014 Preliminary Appearance 

View Document PRELIMINARY APPEARANCE ARRAIGNMENT JUDGE ANNE HIRSCH CC SHACKLEY CR BESWICK

Comment
3: PRELIMINARY APPEARANCE; 07-22-20142A; ARRAIGNMENT; JUDGE ANNE HIRSCH; CC SHACKLEY CR BESWICK;

  • 07/10/2014 Affidavit of Indigency 

Comment
4.99: AFFIDAVIT OF INDIGENCY;

  • 07/10/2014 Findings of Indigency 

View Document FINDING OF INDIGENCY

Comment
5: FINDING OF INDIGENCY;

  • 07/10/2014 Order Establishing Conditions of Release 

View Document ORDER ESTABLISHING COND. OF RELEASE

Comment
6: ORDER ESTABLISHING COND. OF RELEASE;

  • 07/10/2014 Order Determining Probable Cause 

View Document ORD DETERMIN PROBABLE CAUSE

Comment
7: ORD DETERMIN PROBABLE CAUSE;

  • 07/22/2014 Criminal Arraignment Calendar 10:00 AM 

Hearing Time
8:00 AM

Comment
ARRAIGNMENT

  • 07/22/2014 Hearing Cancelled Plaintiff Prosecution Requested 

View Document CANCELLED_ PLAINTIFF_PROS REQUESTED (HIRSCH) CC BALDERSTON

Comment
8: CANCELLED: PLAINTIFF/PROS REQUESTED; (HIRSCH) CC BALDERSTON;

Posted in Uncategorized | Leave a comment

Action for Alienation of Affections History


Although this cause of action is no longer valid in most of the 50 states, it is still a valid lawsuit with respect to children under Washington State jurisprudence.

64 Wn. App. 318, 824 P.2d 1225, WALLER v. STATE


Clues/Evidence

1. Your Former Spouse Tells Your Children Details of Your Divorce


While your former spouse may claim he or she wants to be honest and open with your children, there are some “grown-up” details that are not appropriate to share with them. If your spouse tells your children why you two got a divorce, including details of your conflict and actions you took to end the marriage, your children could paint a very negative picture of you. Your children could see you as responsible for the divorce and develop anger towards you.


2. Your Spouse Makes False Allegations of Domestic Violence


When telling your children details of your divorce, your former spouse may elaborate on actions you took that hurt him or her and made the divorce necessary. Some parents may make false accusations of abuse to further damage their former spouses’ reputations. If your children make references to past abusive behavior that you did not commit, parental alienation may be present.


3. Your Former Spouse Speaks Badly of You in Front of Your Children


Your former spouse may make certain comments that incite anger towards you in front of your children. He or she may say something such as, “We can’t have a family Christmas because your mother/father is spending time with his/her new friend” or similar statements to make your children cast anger and blame in your direction.


4. Your Former Spouse Uses Negative Body Language


If you and your former spouse are in the same room, your spouse may use body language to communicate his or her dislike towards you in front of your children. Crossed arms, rolling eyes, shaking heads, and angry faces are all forms of negative body language that could send a negative message about you to your children.


5. Your Children Are Angry with You


If your former spouse is saying bad things about you at home, these negative attitudes will come out when you spend time with your children. If your children say that they hate you, that they don’t like you, or talk about the reasons they don’t like you, parental alienation may be at play at your spouse’s home.


6. Your Children Feel Guilty After Spending Time with You


A parent who is engaging in parental alienation does not want his or her children to enjoy spending time with the other parent. If your children refrain from saying they had fun with you or liked being with you, your spouse may be privately making them feel guilty about having an attachment to you.


7. Your Former Spouse Pries About Your Private Life


After their visits, your former spouse may ask your children uncomfortable and inappropriate questions about your private life. The children may feel uncomfortable and conflicted since they want to be loyal to both of you. If your spouse suddenly knows details about your life you did not share with him or her, he or she could be using the children to gain that information.


8. Your Former Spouse Keeps Your Children Away From You


Your custody arrangement may say you can only see your children on certain days, but your spouse may sign them up for activities that take place on those days, or your children may become unwilling to see you. The less time your children spend with you, the less of an emotional bond you can develop.


9. Your Former Spouse Gives Your Children Choices About Visits


Depending on your custody agreement, you may have scheduled visits with your children mandated by the court. Your former spouse may ask your children if they want to go visit you – even if they don’t have a choice. As a result, you may not see your children at all, or, when your children have their mandated visits, they may blame you for making them go.


10. Your Former Spouse Asks Your Children to Choose One Parent


Your children cannot choose between two parents, but your spouse may ask them to make that decision. Your children may experience considerable distress as a result, and experience resentment towards you if your spouse convinces them to choose him or her.


Parental alienation is an unfortunate reality for many divorced parents, but it does not have to be. If you believe your former spouse is alienating you from your children, speak to your Indianapolis parental alienation lawyer as soon as possible.


Can you sue someone for ruining your marriage?

The law allows individuals to sue others for ruining their marriages. While most states got rid of it years ago, it’s still on the books in Hawaii, Mississippi, New Mexico, North Carolina, South Dakota and Utah. … The law has since evolved, such that women can now sue.


(Just CLICK on the Link)

Alienation of Affections

Alienation of Affection Actions

Posted in Uncategorized | Leave a comment

3-19-21 Smith v Heather Lund 21CV0197: Lynch Mob Litigation


3-19-21 Smith v Lund 21CV0197

3-19-21 Smith v Lund 21CV0197


Sometime around August 2020, Heather Lynn Lund attempted to garner the services of Amicus Curia to produce perjured sworn court documents to reverse/rescind the 5-year protection order against her husband, entered after he was convicted of domestic violence against her, following a train of other DV complaints involving the duo, based on Heather’s sworn statements to the arresting LEO. Her efforts were delicately (at least initially) rejected by yours truly, an independent investigative photojournalist, member of the 5th Estate, and paralegal.


What followed was literally months of increasingly threatening posts by Heather on the Mason County Blog, coercion, e-mails, and thinly disguised online communications unsuccessfully attempting to obscure her identity to evade accountability for her stalking and harassment. Despite being blocked by every technical procedure available, her obsession only grew in frequency and escalation of threats, finally resorting to attempts to organize mob violence because the reporter refused to allow Heather Lund to control editorial policy by deleting reference to her attempted perjury, manipulation of the courts, and abuse of its clerks.


Very reluctantly, after months of forbearance, this action was filed, and the judge ultimately granted the Petition for protection against stalking and harassment inasmuch as Heather would not appear in court after having been properly served–something she naively thought she could evade through her use of the internet and the sale of her home.


This video of the courtroom proceedings confirms the granting of the petition and the entry of a protection order from stalking and harassment against Heather Lynn Lund, INCLUDING use of social media/the internet to continue her harassment/stalking.


Karens, be ye forewarned! The internet is a paper-thin shield against litigation seeking protection prohibiting abusive stalking and harassment.


ADJUDICATION


3-19-21 Protection Order (21CV0197)

Smith vs. Lund

Posted in Uncategorized | Leave a comment

Collateral Estoppel: Nonmutual Issue Preclusion


REDUCING THE UNFAIR EFFECTS OF NONMUTUAL
ISSUE PRECLUSION THROUGH DAMAGES LIMITS
by Steven P. Nonkest


Collateral Estoppel Essay

Posted in Uncategorized | Leave a comment

Kung Fu Hustle Axe Gang Dance Choreography


ANTIFA & Cops Choreography

Posted in Uncategorized | Leave a comment

WA DOL Vehicle Records Request Form


WA DOL Rec Req

WA DOL Vehicle Records Request form download


WA DOL Records Site

Posted in Uncategorized | Leave a comment

John Dean Warns Trump Prosecutors Are Closing In


by Ed Mazza (3-11-21) —


Donald Trump’s former lawyer and fixer, Michael Cohen, said he would go in for a seventh interview on Wednesday with the Manhattan district attorney’s office pursuing a criminal investigation into the former U.S. president.

John Dean, the White House counsel to President Richard M. Nixon who was once dubbed the “master manipulator” of the Watergate scandal, says he knows legal trouble and former President Donald Trump is in it deep.

Dean shared a report on former Trump personal attorney Michael Cohen, who has been meeting with the Manhattan district attorney’s office, which is investigating Trump for potential fraud, including tax fraud. It was Cohen’s seventh meeting with the DA’s office.

Dean pointed out on Twitter the significance of all those meetings:


“From personal experience as a key witness I assure you that you do not visit a prosecutor’s office 7 times if they are not planning to indict those about whom you have knowledge. It is only a matter of how many days until DA Vance indicts Donald & Co.”

The DA’s office last month obtained years of tax data from Trump’s accounting firm after the Supreme Court ruled against his efforts to block that access. In an interview with Reuters, Cohen called those files the “holy grail.”

Cohen was Trump’s longtime personal attorney and fixer before turning on him in 2018 and testifying before Congress. He also pleaded guilty to lying to Congress and campaign finance violations for arranging the hush-money payments from Trump to porn star Stormy Daniels and former Playboy model Karen McDougal.


Cohen was sentenced to three years in prison but was released into home confinement last year due to the coronavirus pandemic.


 

Posted in Uncategorized | Leave a comment

Déjà vu: Now you really CAN take it with you!


Chatbots that resurrect the dead: legal experts weigh in on ‘disturbing’ technology


Alan Turing was onto something besides winning WWII for England. The Turing test, named after him, proposed that artificial intelligence would be measured by whether a machine could pass…as ‘human’ to a real human. We have arrived.


Our network of snitches (we are now prisoners of our own devices) orders pizzas on our behalf without the pizza delivery guys even knowing. (Who will soon be replaced themselves in any event.) Computers have long since trounced the world’s best chess grandmasters, making modern tournaments battles between automatons. The Japanese are leading the way to companion robots for the elderly and lonely. The Chinese have developed deep-fake algorithms capable of animating a visual and audio likeness persuasive enough to cause alarm.


Many phone con artists now use the most current scam bots to greatly amplify their reach and gull the unwary. The problem is that the point of inflection leading to the Turing-test grilling of the android in the opening scene of Blade Runner (the original) is nigh upon us. If the machines are not yet more human than we are, they ARE becoming more convincing in their deceptions. Moreover, the same technology is currently being used as a substitute for a judge’s insight in sentencing criminal defendants. HAL (the computer overseeing all life support systems aboard the spaceship in 2001), having killed Dave’s shipmates, refused to admit Dave back inside the ship.


“Hal! Open the airlock door. Hal!!”

“I’m sorry, Dave. I’m afraid I can’t do that. …”

Dave Bowman: “I don’t know what you’re talking about, HAL.”

HAL: “I know that you and Frank were planning to disconnect me, and I’m afraid that’s something I cannot allow to happen.”


Segue, decades later, to one of the two 737 MAX jet airliners with hundreds of doomed passengers on board as experienced pilots unsuccessfully wrestled with the aircraft’s computer for control of the craft.


I smell $! I want to invest in the company (Microsoft already has a foot solidly wedged in the doorway of this ‘disturbing’ technology) planning on capitalizing on our innate desire to communicate with our departed dearest friends, lovers, and family–perhaps even our enemies who have satisfyingly preceded us. Like money, it’s going to be a hit. Who can resist?


There is going to be a huge industry dwarfing video games and social media. Not only will we be drawn to the departed, but we will feel compelled to immortalize ourselves while we are able by becoming more prolific–posting every video, snapshot, utterance, idle thought, and amateur performance online to build the database that will become, eternally and with perfect simulation, US–forever! We already recognize the immortality of history’s giants (Lincoln, Homer, Will Rogers, Mark Twain, Melville, Voltaire, Van Gogh, Puccini) through their works. This will democratize the process of immortality, making 15 minutes of fame truly available to everyone, even the dead.


It was recently revealed that in 2017 Microsoft patented a chatbot that, if built, would digitally resurrect the dead. Using AI and machine learning, the proposed chatbot would bring our digital persona back to life for our family and friends to talk to. When pressed on the technology, Microsoft representatives admitted that the chatbot was “disturbing” and that there were currently no plans to put it into production.


Still, it appears that the technical tools and personal data are in place to make digital reincarnations possible. AI chatbots have already passed the “Turing Test”, which means they’ve fooled other humans into thinking they’re human, too. Meanwhile, most people in the modern world now leave behind enough data to teach AI programs about our conversational idiosyncrasies. Convincing digital doubles may be just around the corner.


But there are currently no laws governing digital reincarnation. Your right to data privacy after your death is far from set in stone, and there is currently no way for you to opt-out of being digitally resurrected. This legal ambiguity leaves room for private companies to make chatbots out of your data after you’re dead.


Our research has looked at the surprisingly complex legal question of what happens to your data after you die. At present, and in the absence of specific legislation, it’s unclear.


Be Right Back, an episode of the Black Mirror TV series, featured a woman addicted to a chatbot representation of her dead partner.

Microsoft’s chatbot would use your electronic messages to create a digital reincarnation in your likeness after you pass away. Such a chatbot would use machine learning to respond to text messages just as you would have when you were alive. If you happen to leave behind rich voice data, that too could be used to create your vocal likeness – someone your relatives could speak with, through a phone or a humanoid robot.


Microsoft isn’t the only company to have shown an interest in digital resurrection. The AI company Eternime has built an AI-enabled chatbot that harvests information – including geolocation, motion, activity, photos, and Facebook data – which lets users create an avatar of themselves to live on after they die. It may be only a matter of time until families have the choice to reanimate dead relatives using AI technologies such as Eternime’s.


If chatbots and holograms from beyond the grave are set to become commonplace, we’ll need to draw up new laws to govern them. After all, it looks like a violation of the right to privacy to digitally resurrect someone whose body lies beneath a tombstone reading “rest in peace”.


Bodies in binary

National laws are inconsistent on how your data is used after your death. In the EU, the law on data privacy only protects the rights of the living. That leaves room for member states to decide how to protect the data of the dead. Some, such as Estonia, France, Italy, and Latvia, have legislated on postmortem data. The UK’s data protection laws have not.


To further complicate matters, our data is mostly controlled by private online platforms such as Facebook and Google. This control is based on the terms of service that we sign up to when we create profiles on these platforms. Those terms fiercely protect the privacy of the dead.


For example, in 2005, Yahoo! refused to provide email account login details for the surviving family of a US marine killed in Iraq. The company argued that their terms of service were designed to protect the marine’s privacy. A judge eventually ordered the company to provide the family with a CD containing copies of the emails, setting a legal precedent in the process.


A few initiatives, such as Google’s Inactive Account Manager and Facebook’s Legacy Contact, have attempted to address the postmortem data issue. They allow living users to make some decisions on what happens to their data assets after they die, helping to avoid ugly court battles over dead people’s data in the future. But these measures are no substitute for laws.


One route to better postmortem data legislation is to follow the example of organ donation. The UK’s “opt-out” organ donation law is particularly relevant, as it treats the organs of the dead as donated unless that person specified otherwise when they were alive. The same opt-out scheme could be applied to postmortem data.


This model could help us respect the privacy of the dead and the wishes of their heirs, all while considering the benefits that could arise from donated data: that data donors could help save lives just as organ donors do.


In the future, private companies may offer family members an agonizing choice: abandon your loved one to death, or instead pay to have them digitally revived. Microsoft’s chatbot may at present be too disturbing to countenance, but it’s an example of what’s to come. It’s time we wrote the laws to govern this technology.


This article by Edina Harbinja, Senior Lecturer in Media/Privacy Law, Aston University; Lilian Edwards, Professor of Law, Innovation & Society, Newcastle Law School, Newcastle University; and Marisa McVey, Research Fellow, Aston University, is republished from The Conversation under a Creative Commons license.

Posted in Uncategorized | Leave a comment

Psychopathy & the Origins of Totalitarianism by James Lindsay


Many of the greatest horrors of the history of humanity owe their occurrence solely to the establishment and social enforcement of a false reality. With gratitude to the Catholic philosopher Josef Pieper and his important 1970 essay “Abuse of Language, Abuse of Power” for the term and idea, we can refer to these alternative realities as ideological pseudo-realities.


Pseudo-realities, being false and unreal, will always generate tragedy and evil on a scale that is at least proportional to the reach of their grip on power—which is their chief interest—whether social, cultural, economic, political, or (particularly) a combination of several or all of these. So important to the development and tragedies of societies are these pseudo-realities when they arise and take root that it is worth outlining their basic properties and structure so that they can be identified and properly resisted before they result in sociopolitical calamities—up to and including war, genocide, and even civilizational collapse, all of which can take many millions of lives and can ruin many millions more in the vain pursuit of a fiction whose believers are, or are made, sufficiently intolerant.


The Nature of Pseudo-realities


Pseudo-realities are, simply put, false constructions of reality. It is hopefully obvious that among the features of pseudo-realities is that they must present a plausible but deliberately wrong understanding of reality. They are cult “realities” in the sense that they are the way that members of cults experience and interpret the world—both social and material—around them. We should immediately recognize that these deliberately incorrect interpretations of reality serve two related functions. First, they are meant to mold the world to accommodate small proportions of people who suffer pathological limitations on their abilities to cope with reality as it is. Second, they are designed to replace all other analyses and motivations with power, which these essentially or functionally psychopathic individuals will contort and deform to their permanent advantage so long as their pseudo-real regime can last.


Pseudo-realities are always social fictions, which, in light of the above, means political fictions. That is, they are maintained not because they are true, in the sense that they correspond to reality, either material or human, but because a sufficient quantity of people in the society they attack either believe them or refuse to challenge them. This implies that pseudo-realities are linguistic phenomena above all else, and where power-granting linguistic distortions are present, it is likely that they are there to create and prop up some pseudo-reality. This also means that they require power, coercion, manipulation, and eventually force to keep them in place. Thus, they are the natural playground of psychopaths, and they are enabled by cowards and rationalizers. Most importantly, pseudo-realities do not attempt to describe reality as it is but rather as it “should be,” as determined by the relatively small fraction of the population who cannot bear living in reality unless it is bent to enable their own psychopathologies, which will be projected upon their enemies, which means all normal people.


Normal people do not accept pseudo-reality and interpret reality more or less accurately, granting the usual biases and limitations of human perspective. Their common heuristic is called common sense, though much more refined forms exist in the uncorrupted sciences. In reality, both of these are handmaidens of power, but in pseudo-realities, this is inverted. In pseudo-reality, common sense is denigrated as bias or some kind of false consciousness, and science is replaced by a scientism that is a tool of power itself. For all his faults and the faults of his philosophy (which enable much ideological pseudo-reality), Michel Foucault warned us about this abuse quite cogently, especially under the labels “biopower” and “biopolitics.” These accusations of bias and false consciousness are, of course, projections of the ideological pseudo-realist, who, by sheer force of rhetoric, transforms limitations on power into applications of power and thus his own applications of power into liberation from it. Foucault, for any insight he provided, is also guilty of this charge.


It must be observed that people who accept pseudo-realities as though they are “real” are no longer normal people. They perceive pseudo-reality in place of reality, and the more thoroughly they take on this delusional position, the more functional psychopathy they necessarily exhibit and thus the less normal they become. Importantly, normal people consistently and consequentially fail to realize this about their reprogrammed neighbors. Perceiving them as normal people when they are not, normal people will reliably misunderstand the motivations of ideological pseudo-realists—power and the universal installation of their own ideology so that everyone lives in a pseudo-reality that enables their pathologies—usually until it is far too late.


As a result of this failure of perspective, many particularly epistemically and morally open normal people will reinterpret the claims of pseudo-reality into something that is plausible in reality under the usual logic and morals that guide our thinking, and this reinterpretation will work to the benefit of the pseudo-realists who have ensnared them. This sort of person, who stands between the real world and the pseudo-real, is a useful idiot to the ideology, whose role is to generate copious amounts of epistemic and ethical camouflage for the pseudo-realists. This phenomenon is key to the success, spread, and acceptance of pseudo-realities because without it very few people outside of a small population of psychologically, emotionally, or spiritually unwell people would accept a pseudo-reality as if it were a superior characterization of the genuine article. Clearly, the more plausible the account of pseudo-reality on offer, the stronger this effect will be, and the more power the ideologues who believe in it will be able to accrue.


Pseudo-realities may have any degree of plausibility in their distorted descriptions of reality, and thus may recruit different numbers of adherents. They are often said to be accessible only by applying a “theoretical lens,” awakening a specialized “consciousness,” or by means of some pathological form of faith. Whether by “lens,” “consciousness,” or “faith,” these intellectual constructs exist to make the pseudo-reality seem more plausible, to drag people into participating in it against their will, and to distinguish those who “can see,” “are awake,” or “believe” from those who cannot or, as it always eventually goes, will not. That is, they are the pretext to tell people who inhabit reality instead of pseudo-reality that they’re not looking at “reality” correctly, which means as pseudo-reality. This will typically be characterized as a kind of willful ignorance of the pseudo-reality, which will subsequently be described paradoxically as unconsciously maintained. Notice that this puts the burden of epistemic and moral responsibility on the person inhabiting reality, not the person positing its replacement with an absurd pseudo-reality. This is a key functional manipulation of pseudo-realists that must be understood. The ability to recognize this phenomenon when it occurs and to resist it is, at scale, the life and death of civilizations.


Adoption of a pseudo-reality tends to hinge upon a lack of ability or will to question, doubt, and reject it, along with the fundamental presuppositions and premises of the pseudo-reality. Therefore, the “logical” and “moral” systems that operate within the pseudo-reality will always seek to manufacture this failure wherever they can, and successful pseudo-realist attacks will evolve these features like a social virus until their effectiveness is very high. This deficiency is often the direct result of mental illness, usually paranoia, schizoidia, anxiety, or psychopathy, so maintaining and manufacturing these states in themselves and in normal people is strongly incentivized by the false “logic” and false “morality” of the ideological pseudo-reality. That is, the methods and means applied in service to a pseudo-reality will create and manipulate psychological weaknesses in people to get them to carry water for a destructive lie. The nicer, more tolerant, and more charitable a community is, supposing it lacks the capacity to spot these counterfeits early on, the more susceptible its members will tend to be to these manipulations.


Pseudo-realities and Power


The ultimate purpose of creating a pseudo-reality is power, which the constructed pseudo-reality grants in many ways. Though these means are many, we should name a few. First, the pseudo-reality is always constructed such that it structurally advantages those who accept it over those who do not, frequently by overt double standards and through moral-linguistic traps. Double standards in this regard will always favor those who accept pseudo-reality as reality and will always disfavor those who seek the truth. An ideological pseudo-reality must displace reality in a sufficient population to grant itself power to succeed in its goals. Linguistic traps will often employ strategic double meanings of words, often by strategic redefinition (creating a motte and bailey), will beg the question in ways that force people to participate in the pseudo-reality to respond (often by Aufhebung-style, i.e., Hegelian, dialectical traps), or will begin with an assumption of guilt and demand proof of innocence such that denial or resistance is taken as proof of guilt of some moral crime against the moral system that serves the pseudo-reality (a kafkatrap). Demands will be made with sufficient vagueness such that they can never be said to have been met and such that responsibility for failure will always be the fault of the enemies of the ideology who “misunderstood” them and thus implemented them incorrectly.


Second, the very assertion of pseudo-reality demoralizes all who are pressed into engaging with it by the mere fact of being something false that must be treated as true. We should never underestimate how psychologically weakening and damaging it is to be forced to treat as true something that is not true, with the effect strengthening the more obviously false it is. Despite the fact that obviousness of the pseudo-real distortion concentrates its demoralizing power, pseudo-reality is only pseudo-real when the distortion is not immediately and wholly transparent and also when it is sufficiently widely socially accepted to become a socially constructed pseudo-truth. Whether or not the distortion is apparent, however, the situation it creates is most demoralizing for those who see through it because making the distortions of a pseudo-reality apparent to those who do not already see them is always exceptionally tedious and will be vigorously resisted not only by adherents but by useful idiots.


Thus, third, by trading off normal people’s assumptions that seemingly serious people care about what is true, they successfully force normal people to verify aspects of the pseudo-reality even in the act of denying it by getting the normal person to meet the ideologue part way. This is the relevance of pseudo-reality being pseudo-real, with greater plausibility strengthening the effect. That is, many normal people will fail to realize the pseudo-reality is false because they cannot see outside of the frame of normality that they charitably extend to all people, whether normal or not.


This dynamic bears a brief elaboration. Normal people do not tend to recognize that a broken logic and a twisted morality are being used to prop up an ideological vision—a pseudo-reality—and that the mental states of the people within it (or held hostage by it) are not normal. Some among them, particularly the very but not exceptionally smart, thus skillfully reinterpret the absurd and dangerous claims of the pseudo-realist ideologues into something reasonable and sensible when, in fact, they are not reasonable or sensible. This, in turn, renders the pseudo-reality more palatable than it actually is and further disguises the distortions and underlying will to power presented by the ideological pseudo-realists. All of these features, and others, advantage the ideologue who, like some modern-day Zarathustra, speaks a pseudo-reality into existence, and all of these confer power upon that ideologue while stealing it from every participant in their social fiction, willing or not.


A Note on Ideology


As we are now speaking in terms of ideologues, we need to be clear before continuing that by “ideology” is meant here something closer to “cult ideology” than a more general meaning of the term. It is crucial to distinguish between these so that we do not confuse those sweeping approaches to contextualizing and understanding reality that are generative of comprehension of the real with those that exist in relationship with the pseudo-real.


Liberalism may, for example, be construed as an ideology, but it would not qualify as a cult ideology because, for any shortcomings it may have, it makes itself subordinate to the truth. (Indeed, this together with its incorrect general assumption of the normality of all people is why liberal systems are so susceptible to ideological pseudo-reality and thus so desperately need a vaccine against them.) That liberalism subordinates itself to an external, or objective, truth is obvious from the first principles of liberalism, which arises in the context of favoring rationalism and deferral to the greatest degree of objectivity in any circumstance it seeks to understand or dispute it aims to solve. It also explicitly sides with due processes in service to these objectives and explicitly denies any “ends justify the means” rationales. Accordingly, it exhibits none of the psychopathic tendencies that arise quite regularly in the context of ideologies that depend upon the production and maintenance of some useful but bogus pseudo-reality.


Cult Pseudo-realism and Utopianism


Though we are primarily interested in ideological pseudo-realities, perhaps the most atomic example of a pseudo-reality is not ideological in nature. It is the tragic world of the clinically deluded person, which only he accepts as the “true” state of affairs. “His reality,” “his truth,” is no one else’s because he is not a normal person, and no one is confused by this. The psychopathology involved is readily apparent to all normal people, and, if all goes well, he receives treatment, not enablement. Extending this example up by one rung on the social ladder, we can imagine that our delusional person is sufficiently charismatic and linguistically savvy to establish a cult following of fellow believers in his pseudo-reality. While a cult may not itself be ideological, it requires no effort to climb the ladder from a cult (say of personality, even) all the way up to global pseudo-real sociopolitical movements that endure over decades or even centuries (Hegel, for example, wrote The Phenomenology of Spirit in 1807).


Only two propositions are needed to understand this ladder exists from a single deluded person with a small cult around him to a massive and devastating political movement. The first is simpler: it is that otherwise psychically, emotionally, and intellectually healthy people can be manipulated into pathologies in these domains. That is, such a ladder exists because pseudo-realists are sometimes able to persuade people that the presumptions underlying their pseudo-real construction provide a better read on reality than others, which obviously happens all the time. Cults arise and can grow quite large.


The second is that cults can become ideological, and, more specifically, Utopian. This also happens with some documented frequency, especially in situations where some oversimplification of how to arrange the entire social order in which we all live takes on a glorious vision with a Utopian endpoint—literally, nowhere, in the original Greek (there are no Utopias, only dystopias). A reliable symptom that this is occurring is a vision over a very long time period (often a millennium), after which time all social ills will be cured, that nevertheless requires a revolution in the here and now to begin. These cults of pseudo-reality are very dangerous and threaten us and our civilizations even today.


The Utopian vision hiding at the heart of all (cult) ideologies provides the rationale beneath and means by which an ideological pseudo-reality is created. The pseudo-reality is a construction that misunderstands actual reality as compared against the imagined Utopia that resides at the end of the ideological rainbow. It is constructed to force as many people as possible to live within the Utopian daydream of the people who find reality less tolerable than a fictional alternative that cannot be believed without nearly universal compliance. That is, the pseudo-reality that is constructed in service to an ideology is a fantastic vision of society made perfect for certain intolerant misfits that is then turned backwards upon itself. In other words, as we shall see, Utopian ideologies are psychopathic and arise from an inability to inhabit reality (at least without treatment).


So the construction of an ideological pseudo-reality tends to be done in reverse by starting with an impossibly perfect society (in the view of particular psychopathological people) and then inventing an alternative vision of the world we actually inhabit as a kind of mythology that contains a pseudo-real explanation for why we have not yet arrived at Utopia and how we might get there yet. Details are light—specifically because no plan can replace reality with pseudo-reality—and it will be insinuated by the ideologues that they will be provided as we go. The pseudo-real Utopia will thereby be produced from reality through a process that’s rightly described as alchemical in nature—seeking to make something out of that which cannot produce it—which nearly always involves creating fundamental changes to society and the people who inhabit it. Here it bears mentioning that any injustice in the present and near future can be justified against a vision of perfection for fictitious people a thousand years hence.


Pseudo-realities as Language Games


As implied by Pieper, and as can be seen even in the title of his essay from which we’re taking the term “pseudo-reality” (“Abuse of Language, Abuse of Power”), these constructions tend to arise out of abuses of language that enable abuses of power. These manipulations are therefore attractive to people with strong inclinations to control other people or to take power, particularly when they are of moderately high intelligence, relatively well-off, and linguistically savvy (while, perhaps, lacking in other more concretely valuable skills). That is, pseudo-realities are constructed by linguistically capable manipulators who wish to control other people, and it’s reasonable to assume that a sufficiently convincing (and convicting) pseudo-reality will then draw in more such people who are able to develop the pseudo-world and its fictions and then convince people it maps meaningfully onto reality in a way that it does not. The process by which they do this might most accurately be called discourse engineering, with the exact same connotation that we usually attach to the bigger project it facilitates, social engineering. Some specific types of these language games, to borrow a phrase from Wittgenstein, were mentioned briefly above.


These behaviors, even when done by the sincere person who has confused reality for a pseudo-reality, should all be seen as manipulations and abuses, though it’s always important to recognize that the intention of each participating individual matters in the moral ramifications that follow from this fact. Pseudo-real world-builders tend to manipulate people upon their vulnerabilities, which is a well-known fact of cult recruitment. Thus, they are most effective on people who have an underlying baseline of psychological, emotional, or spiritual illness, particularly of the kinds that relate poorly to the real world and the rough-and-tumble social realities within it. As noted, these are also often manufactured to purpose and target the psychologically, emotionally, and spiritually susceptible, along with the naive, the angry, and the aggrieved. It is in such minds where pseudo-realist manipulations are most effective and can generate a sizable sympathizer base among otherwise normal people, some of whom will be induced into the psychopathologies that underlie the whole project. This is the real alchemy of the pseudo-realist ideological project: turning normal, mostly healthy people into psychologically, emotionally, and spiritually broken water-carriers who can no longer cope adequately with the features of reality and thus must prefer the pseudo-reality that was built to receive them—and, more importantly, to make strategic use of them.


Academic Pseudo-realities


Given the fact that pseudo-realities are the tool of manipulative people who exhibit a high thirst for power and linguistic savvy, pseudo-realists tend to target the (bourgeois) upper-middle class, whose livelihoods depend most upon their credentialing and acceptance by a group of peers, particularly the highly educated, though not the most brilliant, among them. An abnormally high proportion of such individuals are employed in education, media, politics, and especially academia. (The most potent and dangerous ideological pseudo-realities are the kinds of absurdities only academics could truly believe.) Among its features, pseudo-reality, being a linguistic and social construction, enables a path to careerism and credentialing in these sorts of professions far more than in most others, which generates an incentive structure that favors the pseudo-realists’ ambitions.


Aside from base careerism among the otherwise underaccomplishing, these people are also particularly susceptible to rhetorical devices that arouse the possibility that they are insufficiently intelligent, sensitive, or spiritually enriched, and the pseudo-reality will then be presented as the proper “interpretive frame” that resolves these defects. Maybe it will be suggested, for example, that the pseudo-realist has a more complete or sophisticated understanding of reality that the intended target doesn’t or can’t understand (often by appealing to the infinitely complicated “systemic nature” of problems that are otherwise quite straightforward). Maybe a moral or spiritual attack will be made that renders them feeling unlikable by others or self (often through accusations of moral complicity and crimethink). The fact that the pseudo-reality does not conform correctly to actual reality will generate cognitive dissonance that, in the circumstances, will be usefully generative of more indoctrination into the basic premises of the pseudo-reality. This is, of course, a specific manifestation of the process of cult indoctrination and reprogramming.


This feature of pseudo-realist cultism strengthens as the mark accepts more of the premises of the pseudo-reality and thus divorces himself further and further from reality and normal people who live within it. This slowly traps adherents, who have almost no escape mechanism, even when ideological off-ramps are made plainly available. Without even mentioning that they know how their daily bread is buttered—and by, and in relation to, whom—because those who accepted pseudo-reality have distorted their understanding of the world (their epistemology) to the internal (bogus) “logic” of the pseudo-reality and have subverted their ethics (their morality) to the (evil) “moral” system employed by it, they are well and truly trapped by the ideology the pseudo-reality serves. With a distorted logic that can no longer perceive reality except as a counterfeit, they lack the necessary epistemic resources to challenge the ideology, even within themselves. With a subverted morality that perceives evil as good and good as evil in accordance with the slave morality of the pseudo-reality, their entire social environment is conditioned to keep them in a Hell whose gates are locked from the inside. Thus, to understand ideological pseudo-realities and to try to discover something we can do about them, it is necessary to examine their internal logic and moral systems in more detail.


Ideological Paralogic


Because the pseudo-reality is not real and does not correspond in any faithful way to objective reality, it cannot be described in terms that are logical. In the realm of how it thinks about the world, a pseudo-reality will employ an alternative logic—a paralogic, an illogical fake logic that operates beside logic—that has internally comprehensible rules and structure but that does not produce logical results. Indeed, it necessarily must correspond not to reality but to pseudo-reality, and it must also therefore violate the law of non-contradiction. That is, a pseudo-real paralogic will always be internally (and often unrepentantly) inconsistent and self-contradictory. This can be taken as a symptom that a paralogic is being presented in support of a pseudo-reality, as can be any sustained attack on principles of objectivity and reason.


In successful ideological pseudo-realities, the paralogic in play necessarily manipulates normal people outside of its purview into trusting their own (incorrect) assumption that the paralogic must somehow be logical (why wouldn’t it be?). Thus, normal people will (wrongly) assume that the given descriptions of the pseudo-reality must have some reasonable (real) interpretation that is intelligible by applying real logic (incorrectly) to the claims of the pseudo-realist. (Very) smart people will look for this “logical” reinterpretation of nonsense by reflex and will thus render themselves (very smart) useful idiots.


The role paralogic plays in being parallel to logic but for a false reality is crucial to understand. It reliably leads (very) smart, thoughtful people who utterly reject the pseudo-reality—and yet who remain mostly ignorant of its paralogical structure—to carry water for the ideologues inhabiting it by normalizing it while portraying accurate critics as kooks and bad actors. In fact, these (very smart) people are generating the smokescreen to the broader normal public that makes the pseudo-reality look far more reasonable and tethered to reality than it actually is. This intellectual manipulation of (very smart) people is a crucial factor in the establishment of any successful large-scale pseudo-reality, which will only be able to maintain a relatively small proportion of true believers. Of note, nobody is better at this than an educated or credentialed liberal who stands to lose a lot by being branded a kook or bad actor by other useful idiots.


It must be recognized that the paralogical structure that serves the ideological pseudo-reality is ultimately alchemical—not chemical, not scientific, i.e., not logical—in nature. That is, it wants to make something out of nothing (and thus makes nothing out of something). More specifically, it seeks to change the substance of one “reality” into another effectively by means of a magic that does not exist. Indeed, its objective is to transmute the substance of reality as it is into what is envisioned in the pseudo-reality and the Utopia it is ultimately based upon. This means that there can be no legitimate form of disagreement with a pseudo-real paralogic, and there can be no disproof of the pseudo-reality it claims to make sense of. The paralogic, falsely appearing logical, dismisses all such contradictions. Real communism, as we have heard, for example, has apparently never been tried, and the problem was that the people who implemented it, say through the Leninist Soviet model in one design or another, didn’t properly understand it or its crucial elements. Thus, the paralogic of the ideology cannot produce philosophy but only sophistry. It cannot produce gold from lead, but it can get its sorcerers to drink mercury and drive themselves mad.


Ideological Paramorality


Alongside the paralogical structure used to trick useful idiots into defending the ideological pseudo-reality project is a powerful tool of social enforcement using an ostensibly moral dimension. A relativist might refer to this as a “moral framework” that is ethical “within the ideology,” but as it is a morality contingent not upon the facts of human existence as those exist in reality but instead as they are distorted in the constructed pseudo-reality, it would be more appropriate to refer to it as a paramorality, an immoral false morality which lies beside (and apart from) anything that deserves to be called “moral.” The goal of the paramorality is to socially enforce the belief that good people accept the paramorality and attendant pseudo-reality while everyone else is morally deficient and evil. That is, it is an inversion of morality, the slave morality as described by Nietzsche in his Genealogy of Morals.


Because the paramorality is, in fact, immoral, participants in the pseudo-reality will experience vigorous, usually totalitarian, enforcement of the ideological paramorality. It is in this way that the requisite social pressure is created to maintain the lie and its immoral system. In turn, following the cycle of abuse, they will then use the same tenets and tactics to (para)-moralize normal people outside of it, eventually far more vigorously. The trend toward puritan-style pietism, authoritarianism, and eventually totalitarianism in application of this paramorality is a virtual certainty of acceptance of an ideological pseudo-reality, and these abuses will be visited not only on every participant in the constructed fictional reality but also on everyone who can be found or placed within its shadow (which can come to include entire nations or peoples or, in fact, everyone, even those who reject it). Again, this is the true alchemy of the pseudo-realist program; it transforms normal, moral people into immoral agents who must perpetrate evil to feel good and perceive as evil those who do good.


An ideological paramorality is even less accessible to disagreement than the paralogic of an ideological pseudo-reality because it bets everything—including reality itself and the well-being of every individual who inhabits it—against Utopia, a daydream of absolute perfection. Thus, the paramorality sees only two types of people: those who accept the pseudo-reality and replace actual morality with its paramorality, positioned as champions against those who must not want Utopia (and who therefore must want a world of suffering of the kind its architects are least capable of bearing). In this regard, there is no neutrality in a paramoral system, and all shades of gray are alchemically transformed into real black and pseudo-real white. Thus, in a pseudo-realist’s paramorality, there is either fully convicted support or incomprehensible (in the paralogical system) and depraved (in the paramorality) desire to see the indefinite continuation of the evils that will no longer exist when the Utopia is (technically never) realized. Vicious moralizing that will eventually justify violence, including on wide scales, is an eventual guarantee of such demands, if they are enabled sufficiently to shift that power to the ideologues.


This guarantees the paramorality of an ideological pseudo-reality will always be repressive and totalitarian. Dissent and doubt cannot be tolerated, and disagreement must be cordoned off into a moral pit that adherents dare not approach. Further, the paramorality will mandate deceptively bifurcated versions of concepts like tolerance (which must be repressive), acceptance, compassion, empathy, fairness (all of which must be conditional and selective), merit (in regurgitating the doctrines of the pseudo-reality), and compromise (to always favor pseudo-real claims) that preposterously support the pseudo-reality, all propped up by the linguistic games at the heart of the pseudo-real ideological project. That is, specifically, the bifurcation makes these concepts fully applicable in ways that bias for the ideology’s ideas but strictly prohibited for any others. These bogus constructions are meant to unilaterally shift power to the ideologues so that their pseudo-reality can remain propped up.


It must be stressed that the paramorality in play is always an inversion of the prevailing morality that is also parasitic upon it—namely, Nietzsche’s slave morality. In other words, it is a particular type of perversion of morality that can feel more moral than moral but is, in fact, evil. This is because the paramorality acts in service to a pseudo-reality, not reality, and is thus the domain of psychopathy, which, when inflicted on the normal masses, is evil. The goal of the paramorality will always arise from and exist to favor people with particular psychopathologies who cannot otherwise cope with the discomforts of reality. This implies that an ideological pseudo-reality’s most successful means of gaining strength is through appealing to the perceived victimhood of those people and whipping up the grievances of those who have suffered similar injustices with more dignity. When widely empowered, this should be treated as another symptom of impending civilizational calamity and a need to identify and reject the pseudo-reality manipulating these feelings.


The Threads Upholding Pseudo-realities


It cannot be overstated that the pseudo-reality cannot be maintained without strenuous application and enforcement of the relevant paralogic and paramorality that have just been described. Put classically, paralogic is pathos subverting logos, and paramorality is pathos dominating ethos. No society can be healthy—or long survive—in such a state. The threads of paralogic and paramorality have to be identified and severed if we are to escape the calamities of ideological pseudo-realities. Non-contradiction and genuine moral authority are therefore fatal to ideological pseudo-realities.


These two elements—a false paralogic and an evil paramorality—are crucial to the creation, maintenance, and spread of all pseudo-realities that go beyond an unfortunate delusional individual. They are the threads holding up the entire distortion and its increasingly criminal enterprise. If these are cut in any meaningful way, so falls the entire pseudo-reality, which cannot support itself (being unreal) and will necessarily collapse under its own weight. This maneuver will have consequences, of course. It will take with it much of the society it has infected, but it will also liberate those people it has ensnared or holds hostage, both paralogically and paramorally. Learning and teaching others to identify these two threads, the paralogic and paramorality that uphold the pseudo-reality—and thus to see them as fundamentally illogical and immoral—is the key and only possible way to resist and eventually destroy a movement predicated on the social construction and enforcement of an ideological pseudo-reality.


The Caprice of the Party


Because pseudo-reality is not real, it is not possible for people it has ensnared to check any claim within it for themselves, even if they have the courage to feel inclined to do so (as it will induce a paramoralizing beating commensurate with the quantity of power that the pseudo-realists have managed to obtain). This necessitates the elevation and appointment of specialists in one or both of the paralogic and paramorality of the ideological pseudo-reality to make these determinations for everybody (in the aforementioned bifurcated way). The traditional modern name given to this cabal of corrupt “experts” is “the Party” (“Pharisees” is, probably, one more historical name). These are the people who the pseudo-reality is designed to benefit through grift and extortion, and so the paralogic twists to support their views, even when these change, and the paramorality bends to ensure they are always righteous. Professed acceptance of the pseudo-reality, skill in its paralogic, and application of its paramorality to self and others become the political test of Party commitment and access to Party spoils, and in all but the highest echelons of Party activity, these will all be routinely and viciously tested.


Again, it cannot be lost in this analysis just how crucial is the basic fact that pseudo-realities do not describe reality. This carries a number of consequences. For one thing, it commits the Party to being illogical and immoral, as it commits itself to relying upon paralogic and paramorality in place of logic and morality. As should be clear, it is to the greatest advantage of the pseudo-realists (the Party) for their paralogic to be as illogical as it can be while still passing a generic sympathizer’s sniff-test as “logical,” and it is likewise most advantageous for their paramorality to be maximally immoral in the same way.


This state of affairs is a potent weapon of demoralization in and of itself, and it lends itself to a particular caprice quite naturally—even necessarily. The Utopia will never be realized, being an object of pseudo-reality and thus not real, and in its place there will be only the Party’s iron grip on power, maintained at any cost and by any means (and the more desperately and brutally in failure). Lacking an objective standard of reference, and without an appeal to reason that is (in principle) universally accessible, the discourse of the powerful (and of power itself) becomes ever more determinative. A capricious paralogic, one that defines as correct today (but not necessarily tomorrow) whatever the Party says is right today (but not necessarily tomorrow), together with a parallel paramorality that performs the same trick upon what is right, is superior as paralogic and paramorality, and thus both will be favored by the Party. The unfailing result is caprice from the Party, ever the favored tool of dominance and totalitarianism.


Of note, while the Party will always identify and punish scapegoats to enable its abuses and cover up its mounting failures—which are assured due to the break from reality at the heart of its project—the Party itself is the ultimate scapegoat of the pseudo-realist project. This seemingly unlikely fact is comprehensible in the paralogic (notice how it seems illogical) and demanded by the alchemical heart of the paramorality it employs. In the end, and the end will always arrive for every specific pseudo-real project, the pseudo-reality will collapse and the Party will be blamed. Just as the alchemist’s spiritual purity was always called (unfalsifiably) into question when his experiments failed, so too will the corruption of the Party by paramoral “evils” (having a bourgeois mentality, for example) be blamed. The “real” pseudo-real ideology will remain “unattempted” (in a sufficiently uncorrupted form), and, more importantly, the general thrust of the paralogic and paramorality will therefore survive their own death (again, none of this is logical). Christian readers will immediately recognize this as an inversion of Christianity (the inverted Cross), for God puts no one but Himself on the Cross and willingly bears in innocence the responsibility of sin for all others, thus enabling Grace, whereas this approach eschews in guilt all responsibility entirely so as to continue in the world unhindered by its own deviance.


Later, upon finding the right societal alchemical ingredients for the time, the surviving paralogical and paramoral modes will generate a new, generally identical pseudo-reality that threatens (liberal) civilization yet again. This is why the twin threads of the paralogic and the paramorality have to be severed to defeat pseudo-realist ideologies and to vaccinate otherwise healthy societies (especially liberal ones) against their abuses. If this is done for a specific pseudo-reality, then that manifestation will collapse, hopefully before it can do much damage. If it can be done in general, by learning to identify and reject ideological paralogics and paramoralities as a genus of bogus intellectual and ethical activity, that is much better. This happens more or less solely through recognition: learning to spot pseudo-realities, paralogic, and paramorality, and subsequently recognizing that they are the province of psychopathies that should never be given unchecked power over normal people.


Psychopathy and Pseudo-reality


Now that we have established that an ideological pseudo-reality is all but destined, once it starts gaining sway and power, to head toward caprice, abuse, and totalitarianism of the most pernicious, dangerous, and evil forms—and to the death of civilizations and massive numbers of their inhabitants if it is not checked early enough in its progression—we need to pause to understand another fine point that bears on the entire analysis. If we take a step back to consider the delusional cultist with whom the entire analysis began, we can glean another important point about the nature of ideological pseudo-realities that has been repeatedly intimated so far. It is this: it is easy to perceive that this hypothetical person not only might be but probably is psychopathic to a certain degree if he is creating a cult ideology and attendant pseudo-reality. Pseudo-reality is not the domain of the sane, by definition, and wishing to enforce one’s pathologies upon others for one’s own benefit, especially through manipulation of their vulnerabilities, is as near to a simple, general definition of psychopathy as one could hope to read.


Psychopathic ideologies will engender a number of predictable self-concentrating consequences. For one thing, they will by their nature attract and channel the vision of like-minded psychopathic opportunists (“grifters”), who will form the core of the developing Party. They will also degrade the psychological capacity of anyone who comes in contact with the ideology—for or against it. This is done through demoralization in a variety of forms, including (para-)moralizing, ostracization, dialectical trapping, and the highly useful tactic of employing “reversive blockades,” which obliterate a person’s ability to know the truth about reality by forcing distortions from pseudo-reality upon them (thus preventing their reversion toward sanity and out of the clutches of the pseudo-reality and its paralogic and paramorality). These tactics tend to leave people unable to discern what is true any longer and inclined to assume that the truth—whether material or moral—must lie somewhere between where they were before and the pseudo-real assertion being forced upon them. One will immediately notice that this necessarily moves the target further away from reality, as the new position will be some blend of the person’s former belief and an assertion out of pseudo-reality. One will also notice that it is a manipulation, and, when paramoralizing is involved, a coercive one (to the benefit of the psychopathic ideology).


Most concerningly, psychopathic ideologies reliably generate (temporary but) functional psychopathy in otherwise normal people who, by means of these manipulations, become sufficiently convicted fellow travelers and sympathizers of the ideology. Quite literally, aside from the direct effects of demoralization and the destabilization caused by the growing drift of their beliefs away from reality and toward unreality (pseudo-reality), a psychopathic ideology makes its sympathizers believe and act in psychopathic ways themselves, at least in a functional sense. These are the demands and costs of upholding the paralogic (so as not to be a “fool” in pseudo-reality) and the paramorality (so as not to be the wrong kind of person in pseudo-reality), and slowly these victims of the ideology become the monsters they were too weak to fight. As noted previously, virtues like tolerance and empathy are intentionally perverted until they begin to bifurcate so that they carry a political valence (paramorality good, morality bad) that increasingly favors the pseudo-real ideology and becomes legitimately psychopathic as the effect strengthens.


Eventually, a normal person subjected to these circumstances ceases to be normal. This occurs when they “awaken” to a “full consciousness” in the pseudo-reality. At that point, they will have reached a place where, from their perspective, pseudo-reality is reality and reality is the pseudo-reality. That is, they will be psychopathic themselves, in thrall to the paralogic of the pseudo-real delusion and with bifurcated and narrowed ethics and moral virtues under its paramoral system. Presumably, in the majority of such previously normal people, this effect is temporary and contingent upon participation in the cult, though it is likely that some of the relevant psychological damage will be long-lasting, if not permanent. Nevertheless, in the short term, the result of this dynamic is a growing body of functionally and legitimately psychopathic people accruing more and more power for themselves, which they use (in psychopathic ways) to enforce their ideological pseudo-reality on everyone, most notably everyone else.


This process is quite exquisite. The deficiencies of the paralogic, the caprice of the paramorality, and the dissonance around the pseudo-reality itself will all tend to engender in the susceptible normal person a sense of distress about inhabiting reality similar to the one the pseudo-reality exists to accommodate. Obviously, this is convenient for recruitment, indoctrination, and eventual (psychopathic) reprogramming because the pseudo-reality is constructed in such a way as to enable those specific psychopathologies to flourish and avoid detection and treatment. In this regard, one might refer to the spread of a psychopathic ideology and its pseudo-reality with now-familiar phrases like “the madness of crowds,” which is more apt than one might realize at first blush, and even sociopolitical “zombification.”


Importantly, this circumstance implies that the average “fellow traveler” in a cult ideology not only does not realize they are a cultist who is using tools and tactics of manipulation (paralogic and paramorality) on the people in their lives, both normal and ideologically “awakened” fellow cultists, but also cannot realize this without first abandoning the paralogic and paramorality that have captured them and rejecting the ideological pseudo-reality in a fundamental way. They find themselves in the broken position not only of being functionally psychopathic but also of being reality-inverted such that they believe all normal people who are not (yet) cultists are the cultists while they, themselves, are not. This represents a complete reversal of sanity, and the conversion from normal to ideologically psychopathic is, by that point, complete. These people, as many have learned the hard way throughout history, are the otherwise good people who are capable of perpetrating genocides.


Cutting the Threads


What, then, could possibly be the answer to this perilous and perennial tangle? Fortunately, the first step, at least, is very simple. It’s mere awareness. It is learning to recognize the constructed pseudo-reality for what it is—a fabricated simulation of reality that is unfit for human societies—and beginning to reject unapologetically any demand to participate in it. This means refusing the analysis of the paralogic (by seeing its contradictions) and refusing to be held to account by the paramorality (by recognizing its caprice, malice, and evil), for these are the two threads that sustain the lie. (An old word for this is “secularism,” in the non-specific sense.) In the exact instant one becomes competent at spotting the lie—or the network of lies—held in service of a constructed pseudo-reality and its social enforcement, one already possesses the perspective necessary to break the spell of the pseudo-reality in its entirety. This, knowing the cheat for what it is, more than any other thing, is how the strings of paralogic and paramorality are cut, and with them cut, the pseudo-reality will come crashing down.


This can only be done by learning enough to see the games, telling the truth, and refusing to be coerced or forced into participating in the increasingly hegemonic pseudo-reality before it claims totalitarian power. Speaking practically, there are two straightforward ways to do this: one is to refute the pseudo-reality, and the other is to reject it.


For most people, the latter of these is easier than the former, and it requires less of them. Strength of will and character will suffice. Simply refusing to participate in the pseudo-reality, to utilize its paralogic, or to bow to its paramorality—and living one’s life as though the pseudo-reality is utterly irrelevant—is a powerful act of defiance against an ideological pseudo-reality. It requires nothing more of a person than a convicted statement that says, “This does not apply to me because it is not me” (or, “not even real”), a refusal to make decisions based in socially constructed fear and intimidation, and a willingness to live one’s life on the most normal terms possible. This is a peaceful act of defiance that many other normal people (those outside the pseudo-reality) will recognize as strength, and while it may cost you in the short term and in some ways, it will reap rewards in the long term and in others, at least up until the point that the paramoral totalitarian trap is fully sprung on a sufficiently broken and demoralized society. Just keep your head up and refuse to live your life on someone else’s (psychopathic) terms, and you will do much against such budding regimes.


Refuting pseudo-reality is harder, as it requires much more specific knowledge along with skill, strength of character, and courage. It also must be done, at least by someone, if an ideological pseudo-reality has already taken root. Such a pseudo-reality has to be shown to be a false reality, which is to say a pernicious fiction, to as many people as possible. To do this, its distortions of reality, the contradictions of its paralogic, and the evils and harms of its paramorality must all be exposed and explained as a first step. These objectives require devoting (which is, in some sense, wasting) a great deal of time and effort to intentionally learning something one knows to be false and therefore (if one is successful) useless. The material is also demoralizing to learn, given its psychopathic nature. It’s not for the faint of heart, even if all goes well.


Commonly, also, this process will not be comfortable, and it requires tremendous courage of precisely the kind that ideological demoralization is very effective at eroding and containing. The paralogic will interpret direct dissent as stupid or crazy, and the paramorality will characterize it as evil (or as motivated by evil intentions, even unconscious ones outside the dissenter’s awareness). The courage to bear these outrageous insults and slanders, and to bear their unjust social consequences, is therefore a necessary precondition to putting a halt to totalitarianism. It is understandable why most will not choose this path, but be warned: the longer one waits, the worse this gets.


For those who will take up the task, the approach is a combination of being informed, being courageous, being forthright, and being subversively funny. Being informed is necessary to identify, expose, and explain the distortions of the pseudo-reality and to juxtapose them with reality. It is also necessary to make use of the most decisive tool that exists against ideological pseudo-realities: the law of non-contradiction. Pseudo-realities and their paralogical structures always contradict reality and themselves, and exposing these contradictions exposes their lies. Being courageous and forthright is necessary to believe in oneself and one’s (real) values and thus to withstand the paramoralizing attacks and social pressure they will generate; these qualities also inspire more of the same in others and restore moral authority to those who have been drained of it by these distortions. Being subversive and funny undermines the psychopathy and will to power that characterize the entire ideological pseudo-realist enterprise.


Resisting effectively and with sufficient knowledge (refuting) is, of course, best, but resisting at all, even by mere refusal to participate in any obvious lie (rejecting), is also effective. This is because revealing the ideological pseudo-reality for what it is—false and irrelevant to actual reality—undermines the pseudo-reality and encourages more people to refute and reject it. Even more powerful, however, is revealing the underlying nature of the ideological pseudo-reality—that it is psychopathic—to normal people (including those partially ensnared), which ranks highly among the ways the paralogical and paramoral threads can be severed. And a psychopathic reaction is precisely what will result from effectively resisting a psychopathic ideology. The challenging part is that you, who dare resist their games and elude their trap, become the target of their psychopathic ire, and many sympathizers whom you would usually count as friends will take sides against you (there is no neutral ground in the paramorality). The earlier one enters this fight, the more courage it takes, and yet the more valuable it is.


Some of the requisite courage to resist can be found by remembering that the pseudo-reality is not real, its paralogic is not logical, and its paramorality is not moral. That is, it’s not you; it’s them. Some more backbone can be dredged up by realizing that once the pseudo-real begins displacing the real for even a few percent of the population, the question is no longer whether things will go bad but how bad they will get before the bubble bursts. Reality will always win, and calamity comes in proportion to the size of the lie between us and it, so it is better to act sooner rather than later. Still more heart resides in grasping that it gets worse right up until a real resistance mounts, and then, after a rocky transition, it starts getting better. The time to act is therefore now.


The way resistance—just plain resistance—works is by restoring to the normal person the epistemic and moral authority necessary to resist the ideologue’s illegitimate demands to participate in a pseudo-real fraud. That is, it restores confidence in normality to the normal. No one feels ashamed of resisting a con, whatever form it takes, and a con is the real phenomenon we face with any growing ideological pseudo-reality. Its paralogic and paramorality work to drain us of our sense of authority to know what is and is not true and what is and is not right. One’s authority is lacking only under the assumptions of the paralogical and paramoral systems, however—that is, inside the pseudo-reality—and it can be reclaimed by anyone who simply refuses to participate in the lie. Step outside of the pseudo-reality (take the “red pill,” as depicted in The Matrix), and you’ll see.


James Lindsay

An American-born author, mathematician, and political commentator, Dr. James Lindsay has written six books spanning a range of subjects, including religion, the philosophy of science, and postmodern theory. He is the founder of New Discourses and is currently promoting his new book, “Cynical Theories: How Activist Scholarship Made Everything about Race, Gender, and Identity―and Why This Harms Everybody.”
