Improving Media Literacy: A Refresher
- USKRG

- 2 days ago
- 25 min read
"The opposite of truth is not just a lie. The opposite of truth is chaos." — Nikki Alexander, Forensic Pathologist on BBC crime drama Silent Witness

(Source of Above Quote: Cornell University Library's "Misinformation, Disinformation, and Propaganda: Unreliable News Content")
USKRG is frequently asked about the legitimacy of internet claims being made by various sources, the most recent of which has caused unnecessary discomfort within the Korean Adoptee community. We're always here to help answer questions for adoptees to the extent of our knowledge and the knowledge of subject matter experts; however, in some cases, we have the same access to sourced and verified information as the general public.
When responding to unknowns, USKRG has taken care to answer questions in terms of likelihood of incidence. This can be unsatisfying, but it is more accurate than speaking in absolutes, which may be more harmful than admitting that information is unverified or incomplete. Where possible, we will correspond directly with sources with firsthand knowledge and release further information, as we have in the past. This is to prevent the spread of misinformation and to put minds at ease with confirmed information.
USKRG wishes to emphasize that KADs should not be gathering information from just our channels (or any single source of information). Below, we've put together this refresher to help everyone interested in conducting their own research into the legitimacy of online claims and online sources. While the sources used to put this guide together come from professional journalistic and academic perspectives, they are still valuable for everyday media consumers looking to maintain or improve their critical thinking skills.
Sections
Introduction
Improving Media Literacy
Why is Media Literacy Important?
Misinformation vs Disinformation
Tips for Spotting Misinformation
Further Reading
Introduction
From News Literacy Project's "In Brief: Misinformation"
Few problems with our information environment are more pressing or prominent than the proliferation of misinformation online. False and misleading content is often designed to target our emotions and use our biases against us, exploiting our most deeply held beliefs and values to bypass our critical, rational thought processes.
But thinking and learning about misinformation can be challenging. Partisans lob strategic accusations of “fake news” at ideas they disagree with, or at news coverage they want to discredit. Social media platforms that have policies against misinformation fail to enforce them in ways that are consistent and effective. Bad actors who create and purposefully amplify disinformation do so for a variety of reasons, including political or financial gain or to simply to cause confusion and social division. They also employ an array of disinformation tactics and cover their tracks in clever ways.
Improving Media Literacy
From the NBCU Academy Article "Improve your Media Literacy: Think like a Journalist!"
What is media literacy?
Phipps-Smith: Media literacy teaches people how to critically evaluate all media messages: advertising, TV, movies, even video games.
Veiga: News literacy falls under the umbrella of media literacy. It’s specifically being able to identify and seek out credible news sources and make sense of the news.
Evon: People are bombarded with information on social media. Being able to sift through what is reliable and what’s not is an extremely important skill. If you’re not careful about the media that you’re consuming, you might develop some opinions that aren’t based on credible information.
How can people improve their news literacy skills?
Phipps-Smith: First, stop and slow down whenever you encounter a claim online and you feel a strong reaction.
Second, evaluate the source. Is this coming from a reliable, standards-based news organization? Or is this clickbait or a third party that can't be trusted?
Third, talk to the people in your life about misinformation or things that can be debunked.
What are some tips to spot and stop misinformation?
Phipps-Smith: We have this acronym, “CARES,” to walk people through fact-checking and debunking online claims:
Context – Is this being presented with the proper information?
Authenticity – Is this undoctored and unedited?
Reasoning – Does this present a logic-based argument?
Evidence – What proof backs up this claim?
Source – Does this come from a standards-based news organization or something less trustworthy?


Graphic from RumorGuard's "Guard Yourself Against Misinformation"
Why is Media Literacy Important?
From the Masterclass Article "A Basic Guide to Media Literacy"
A thorough understanding of the role of media helps you to assess the trustworthiness of the information you encounter. Media literacy is essential for several reasons:
It helps you comprehend a creator's objective. In order to develop your own perspective on the subject matter, it's essential to understand whether a piece of mass media is attempting to entertain, inform, or persuade.
It develops you as a critical thinker. Media literacy builds critical thinking skills by teaching you to thoroughly evaluate the different types of media you consume. Essential skills of inquiry are especially important in media environments where misinformation and fake news are common.
It allows you to create responsibly. Media literacy teaches ethical methods for creating your own media so that you can become a more effective communicator.
It encourages self-expression. Media literacy teaches you to form your own opinions rather than just accepting a media message at face value.
It enables you to recognize an author's point of view. Every creator makes content with a specific point of view. Knowing this helps you open your mind to different perspectives while also keeping you alert to bias.
Misinformation vs Disinformation
From the News Literacy Project's infographic "In Brief: Misinformation"
Mis- and disinformation are fundamentally exploitative in nature, often targeting our most deeply held values and beliefs to elicit a strong emotional reaction that overrides our more rational thought processes.
While the emotions most often elicited by mis- and disinformation are fear, anger and outrage, more agreeable emotions like curiosity and hope are also used to bypass our cognitive defenses.
Misinformation is information that is misleading, erroneous or false. Misinformation is generally shared — and sometimes created — by people who are unaware that it's inaccurate. This is the best term to use when the intent of the creator or sharer is unknown.
Disinformation is a subset of misinformation that is deliberately created or shared with the intention to misinform and mislead others, usually to achieve a desired ideological, political or financial result.
Why do people share Misinformation?
Many people share misinformation unknowingly and sometimes with good or altruistic intentions — whether to articulate their perspectives, warn others away from danger or join others in trying to make sense of the world around them. But some research suggests that some people also knowingly share things they suspect are false — whether to damage “the other side” in a political debate, get social media likes and shares, or conform to their ideological identities.
Bad actors — such as hyperpartisans, trolls and even foreign agents — create and share disinformation to cause division and confusion, to promote political interests and points of view, or for financial gain.
What are the Seven types of Misinformation?
As cited in the News Literacy Project's infographic, there are seven distinct types of problematic content that sit within our information ecosystem. They sit on a scale, one that loosely measures the intent to deceive. Source: "Fake News. It's Complicated."
Satire or Parody
No intention to cause harm, but has potential to fool
False Connection
When headlines, visuals, or captions don't support the content
Misleading Content
Misleading use of information to frame an issue or individual
False Context
When genuine content is shared with false contextual information
Imposter Content
When genuine sources are impersonated
Manipulated Content
When genuine information or imagery is manipulated to deceive
Fabricated Content
New content that is 100% false, designed to deceive and do harm

Graphic from "Fake News. It's Complicated."
Tips for Spotting Misinformation
From the BBC Article "The Sift Strategy: A Four Step Method for Spotting Misinformation"
1. Stop
Perhaps one of the most pernicious aspects of the modern era is its urgency. Thanks to everything from our continual phone use to nonstop work demands, far too many of us seem to be navigating the world at a dizzying speed.
Being online, where both news cycles and content are especially fast-paced and often emotive, can put us in a particularly "urgent" mindset. But when it comes to identifying misinformation, immediacy is not our friend. Research has found that relying on our immediate "gut" reactions is more likely to lead us astray than if we take a moment to stop and reflect.
The first step of the Sift method interrupts this tendency. Stop. Don't share the post. Don't comment on it. And move on to the next step.
2. Investigate the source
Posts show up in our social media feeds all the time without us having a clear sense of who created them. Maybe they were shared by a friend. Maybe they were pushed to us by the algorithm. Maybe we followed the creator intentionally, but never looked into their background.
Now's the time to find this out. Who created this post? Get off-platform and do a web search. And because search results can be misleading, make sure you're looking at a reputable website. One that fact-checkers often use as a first port of call might surprise you: Wikipedia. While it's not perfect, it has the benefit of being crowd-sourced, which means that its articles about specific well-known people or organisations often cover aspects like controversies and political biases.
While you're investigating, ask:
If the creator is a media outlet, are they reputable and respected, with a recognised commitment to verified, independent journalism?
If it's an individual, what expertise do they have in the subject at hand (if any)? What financial ties, political leanings or personal biases may be at play?
If it's an organisation or a business, what is their purpose? What do they advocate for, or sell? Where does their funding come from? What political leanings have they shown?
And finally, once you've run your analysis (which can take just a couple of minutes), the most telling question of all: Would you still trust this creator's expertise in this subject if they were saying something you disagreed with?
3. Find better coverage
If, from the previous step, you find that you still have questions about the source's credibility, now's the time to dig a little further. What you're looking for is whether a more trustworthy source, like a reputable news outlet or fact-checking service, has reported and verified the same claim.
No surprise, but I find Google has some of the best tools for doing this. Obviously, there's Google itself, and if you're specifically looking to see if news outlets have covered something, Google News.
But I sometimes prefer to use the Google Fact Check search engine, which searches just fact-checking sites, specifically. Just keep in mind that Google says it doesn't vet the fact-checking sites it includes, so to make sure your results are reputable, you'll need to do a little further sleuthing – I like to see if an outlet has signed up to Poynter's International Fact-Checking Network, which you can check here.
If it's a photo you're investigating, use a reverse image search tool to see where else the image comes up online. Google has one, but I also like TinEye and Yandex. (You can also use these for video: take a screenshot from the video and put that in for your image search).
Your goal? To see whether there are any credible sources reporting the same information as what you're seeing, and saying that it's verified.
4. Trace the claim to its original context
Often, you'll wind up doing this at the same time that you're trying to find better coverage, at least if you're using the tools mentioned above. But the idea here is a little different. You're trying to find out where the claim came from originally.
Even if you see that a claim has been reported on by a credible media outlet, for example, it may not be original reporting; they may have gotten that claim from another outlet. Ideally, the original story should be linked – so always go there – but if it's not, you may need to search for it separately.
Crucially, you want to figure out not just whether something like this really is true, but whether anything was taken out of context. If you're looking at an image, does the way it was described in the social media post you saw line up with its original caption, context, and location? If it's a quotation from a speaker, was anything edited out or taken out of context? Or, when you see their full interview or speech, does it seem like perhaps they misspoke in that moment?
Taking these steps before deciding whether to simply share a claim might feel onerous. But the time investment of just a few minutes may save you not only embarrassment – but help ensure you're not spreading misinformation that, at its most dramatic, can even lead to illness and death.
Today, anyone can make a claim on social media. And anyone can be the person whose re-sharing of that claim is the one who makes it go viral. That means it's the responsibility of each one of us to make sure that what we are posting, liking, and sharing is, first and foremost, actually true.
Further Reading
Using Academic Tips to Evaluate Web Sources
From Georgetown University's Article "Evaluating Internet Content"
Unlike similar information found in newspapers or television broadcasts, information available on the Internet is not regulated for quality or accuracy; therefore, it is particularly important for the individual Internet user to evaluate the resource or information. Keep in mind that almost anyone can publish anything they wish on the Web. It is often difficult to determine authorship of Web sources, and even if the author is listed, he or she may not always represent him or herself honestly, or he or she may represent opinions as fact. The responsibility is on the user to evaluate resources effectively.
Ask yourself these questions before using resources from the Internet
Author
Is the name of the author/creator on the page?
Are his/her credentials listed (occupation, years of experience, position or education)?
Is the author qualified to write on the given topic? Why?
Is there contact information, such as an email address, somewhere on the page?
Is there a link to a homepage?
If there is a link to a homepage, is it for an individual or for an organization?
If the author is with an organization, does it appear to support or sponsor the page?
What does the domain name/URL reveal about the source of the information, if anything?
If the owner is not identified, what can you tell about the origin of the site from the address?
Purpose
Knowing the motive behind the page's creation can help you judge its content.
Who is the intended audience?
Scholarly audience or experts?
General public or novices?
If not stated, what do you think is the purpose of the site? Is the purpose to:
Inform or Teach?
Explain or Enlighten?
Persuade?
Sell a Product?
Objectivity
Is the information covered fact, opinion, or propaganda?
Is the author's point-of-view objective and impartial?
Is the language free of emotion-rousing words and bias?
Is the author affiliated with an organization?
Does the author's affiliation with an institution or organization appear to bias the information?
Does the content of the page have the official approval of the institution, organization, or company?
Accuracy
Are the sources for factual information clearly listed so that the information can be verified?
Is it clear who has the ultimate responsibility for the accuracy of the content of the material?
Can you verify any of the information in independent sources or from your own knowledge?
Has the information been reviewed or refereed?
Is the information free of grammatical, spelling, or typographical errors?
Reliability and Credibility
Why should anyone believe information from this site?
Does the information appear to be valid and well-researched, or is it unsupported by evidence?
Are quotes and other strong assertions backed by sources that you could check through other means?
What institution (company, government, university, etc.) supports this information?
If it is an institution, have you heard of it before? Can you find more information about it?
Is there a non-Web equivalent of this material that would provide a way of verifying its legitimacy?
Currency
If timeliness of the information is important, is it kept up-to-date?
Is there an indication of when the site was last updated?
Links
Are links related to the topic and useful to the purpose of the site?
Are links still current, or have they become dead ends?
What kinds of sources are linked?
Are the links evaluated or annotated in any way?
Note: The quality of Web pages linked to the original Web page may vary; therefore, you must always evaluate each Web site independently.
Using Logic to Evaluate Online Information
From the University of Iowa's University Libraries Article "Evaluating Online Information: Logical Fallacies"
Like lateral reading and identifying fake news, identifying logical fallacies is another method we can use to determine whether online information is valid. For many decades, we have relied on broadcast news organizations to filter, edit, and fact-check the information they share with us. In terms of information shared via social media, we are the ones who must do the very difficult and challenging work that used to be done by editors and fact-checkers. Learning how to identify fallacies of logic can help you know, and explain, why someone’s argument does not prove their point.
Since the time of Ancient Greece, philosophers, logicians, and regular people have developed ways to identify types of illogical arguments. These logical fallacies are errors in reasoning. In a logical fallacy, the arguer does not provide enough evidence to support their claim. It is important to note that just because someone uses a logical fallacy, it does not necessarily mean their claim is wrong; it simply means that the arguer has not provided either enough, or the right kind, of evidence, and therefore has not proven their point.
Ad Hominem
The ad hominem fallacy involves bringing negative aspects of an arguer, or their situation, to bear on the view they are advancing.
Example: Thompson’s proposal for the wetlands may safely be rejected because last year she was arrested for hunting without a license.
The hunter, Thompson, although she broke the law, may nevertheless have a very good plan for the wetlands. (Stanford)
Post Hoc, Ergo Propter Hoc
This is a conclusion that assumes that if 'A' occurred after 'B' then 'B' must have caused 'A.'
Example: I drank bottled water and now I am sick, so the water must have made me sick.
In this example, the author assumes that if one event chronologically follows another the first event must have caused the second. But the illness could have been caused by the burrito the night before, a flu bug that had been working on the body for days, or a chemical spill across campus. There is no reason, without more evidence, to assume the water caused the person to be sick. (Purdue)
False Dichotomy
Definition: In false dichotomy, the arguer sets up the situation so it looks like there are only two choices. The arguer then eliminates one of the choices, so it seems that we are left with only one option: the one the arguer wanted us to pick in the first place. But often there are really many different options, not just two—and if we thought about them all, we might not be so quick to pick the one the arguer recommends.
Example: “Caldwell Hall is in bad shape. Either we tear it down and put up a new building, or we continue to risk students’ safety. Obviously we shouldn’t risk anyone’s safety, so we must tear the building down.”
The argument neglects to mention the possibility that we might repair the building or find some way to protect students from the risks in question—for example, if only a few rooms are in bad shape, perhaps we shouldn’t hold classes in those rooms.
Tip: Examine your own arguments: if you’re saying that we have to choose between just two options, is that really so? Or are there other alternatives you haven’t mentioned? If there are other alternatives, don’t just ignore them—explain why they, too, should be ruled out. Although there’s no formal name for it, assuming that there are only three options, four options, etc. when really there are more is similar to false dichotomy and should also be avoided. (UNC)
Straw Man
Definition: One way of making our own arguments stronger is to anticipate and respond in advance to the arguments that an opponent might make. In the straw man fallacy, the arguer sets up a weak version of the opponent’s position and tries to score points by knocking it down. But just as being able to knock down a straw man (like a scarecrow) isn’t very impressive, defeating a watered-down version of your opponent’s argument isn’t very impressive either.
Example: “Feminists want to ban all pornography and punish everyone who looks at it! But such harsh measures are surely inappropriate, so the feminists are wrong: porn and its fans should be left in peace.”
The feminist argument is made weak by being overstated. In fact, most feminists do not propose an outright “ban” on porn or any punishment for those who merely view it or approve of it; often, they propose some restrictions on particular things like child porn, or propose to allow people who are hurt by porn to sue publishers and producers—not viewers—for damages. So the arguer hasn’t really scored any points; he or she has just committed a fallacy.
Tip: Be charitable to your opponents. State their arguments as strongly, accurately, and sympathetically as possible. If you can knock down even the best version of an opponent’s argument, then you’ve really accomplished something. (UNC)
Slippery Slope
Definition: The arguer claims that a sort of chain reaction, usually ending in some dire consequence, will take place, but there’s really not enough evidence for that assumption. The arguer asserts that if we take even one step onto the “slippery slope,” we will end up sliding all the way to the bottom; he or she assumes we can’t stop partway down the hill.
Example: “Animal experimentation reduces our respect for life. If we don’t respect life, we are likely to be more and more tolerant of violent acts like war and murder. Soon our society will become a battlefield in which everyone constantly fears for their lives. It will be the end of civilization. To prevent this terrible consequence, we should make animal experimentation illegal right now.”
Since animal experimentation has been legal for some time and civilization has not yet ended, it seems particularly clear that this chain of events won’t necessarily take place. Even if we believe that experimenting on animals reduces respect for life, and loss of respect for life makes us more tolerant of violence, that may be the spot on the hillside at which things stop—we may not slide all the way down to the end of civilization. And so we have not yet been given sufficient reason to accept the arguer’s conclusion that we must make animal experimentation illegal right now.
Like post hoc, slippery slope can be a tricky fallacy to identify, since sometimes a chain of events really can be predicted to follow from a certain action. Here’s an example that doesn’t seem fallacious: “If I fail English 101, I won’t be able to graduate. If I don’t graduate, I probably won’t be able to get a good job, and I may very well end up doing temp work or flipping burgers for the next year.”
Tip: Check your argument for chains of consequences, where you say “if A, then B, and if B, then C,” and so forth. Make sure these chains are reasonable. (UNC)
Using Lateral Reading to Evaluate Online Information
From the University of Iowa's University Libraries Article "Evaluating Online Information: Lateral Reading"
So-called "fake news" websites and organizations with a hidden agenda are getting very good at deception, and, in order to be responsible Internet users, we need to be more vigilant about verifying our sources. One strategy that we can use is "lateral reading."
When you find information from a source you haven't encountered before, do some research about the source BEFORE deciding whether you should listen to anything the source has to say.
Try to determine a consensus about the source by researching it using Google and Wikipedia. You can search for any of the following key components:
Publication (Usually best)
Funding organization (Can often be found in the website's "about" page)
Author
Content (Cut-and-paste the title of the website into Google)
Read a minimum of 3 to 5 new sources to see what they have to say about your original source.
If you can't find 3 to 5 sources, that is information in itself. It means your original source doesn't have an established reputation. Proceed with caution.
Once you determine a consensus from these new sources, make a judgment call about the original source's trustworthiness.
Understanding Creator Bias to Evaluate Online Information
From the University of Virginia Library's Article "Fake News"

While it is tempting to call biased news "fake", there are some important differences. Fake news is without basis or fact. Biased news presents facts, but does so selectively and/or with language that sensationalizes. Almost all writing has some form of implicit bias, but the best researchers and journalists strive to acknowledge and eliminate bias in their own work. When reading news, look for signs of bias, and work to identify where an author may have omitted or skewed information or data. Learning to identify bias is important so you can get to the facts and evaluate the information for yourself.
What is Bias?
Bias is a particular inclination toward or preconceived notion of an idea or a person.
Journalists and other media specialists often rely on bias to sway our opinions and decisions.
"Bias" does not necessarily equal "untrue." It simply means that the article or video offers a skewed perspective.
Common Forms of Bias
Omission: Failing to report facts that tend to disprove an opposing viewpoint.
Story Selection: Sharing statistics, stories, or other information that support only one viewpoint, while ignoring those stories that support other viewpoints.
Placement: Prominently placing news stories that support one viewpoint in order to overshadow stories that support another.
Selection of Sources: Including more sources (sometimes called “experts” or “observers”) in a story who support one view over another.
Spin: Highlighting aspects of an issue favorable to one viewpoint without noting aspects favorable to other viewpoints.
Labeling: Tagging one individual or group with extreme labels, while not tagging others. Applying terms such as “expert" to individuals or groups without providing information on their background or ideological slant.
How to Identify Bias
To help identify bias in the media, ask yourself the following questions:
Who is the author/reporter/organization presenting the information?
Who are the sources being quoted?
From what point of view is the information presented?
Does the author/reporter/organization use stereotypes?
Does the article or video include unchallenged assumptions?
Does the headline match the story?
Is there loaded or extreme language?
Is information presented with context?
Understanding Your Own Biases to Evaluate Online Information
From the University of Iowa's University Libraries Article "Evaluating Online Information: Bias & Disinformation" and Northwestern University's Reposting of "12 Cognitive Biases that Prevent you from being Rational"
A recent article in The Atlantic discusses how people react when they discover information they don't want to accept: "People see evidence that disagrees with them as weaker, because ultimately, they're asking themselves fundamentally different questions when evaluating that evidence, depending on whether they want to believe what it suggests or not, according to psychologist Tom Gilovich. 'For desired conclusions,' he writes, 'it is as if we ask ourselves "Can I believe this?", but for unpalatable conclusions we ask, "Must I believe this?"' People come to some information seeking permission to believe, and to other information looking for escape routes."
Since there's nothing we can do to make the world less biased, what we can do is learn how to deal with bias. The more you know about how bias operates, the better you'll be able to manage biases when consuming information.
Confirmation Bias
We love to agree with people who agree with us. It's why we only visit websites that express our political opinions, and why we mostly hang around people who hold similar views and tastes. We tend to be put off by individuals, groups, and news sources that make us feel uncomfortable or insecure about our views — what the psychologist Leon Festinger called cognitive dissonance. It's this preferential mode of behavior that leads to the confirmation bias — the often unconscious act of referencing only those perspectives that fuel our pre-existing views, while at the same time ignoring or dismissing opinions — no matter how valid — that threaten our world view. And paradoxically, the internet has only made this tendency even worse.
Ingroup Bias
Somewhat similar to the confirmation bias is the ingroup bias, a manifestation of our innate tribalistic tendencies. And strangely, much of this effect may have to do with oxytocin — the so-called "love molecule." This neurotransmitter, while helping us to forge tighter bonds with people in our ingroup, performs the exact opposite function for those on the outside — it makes us suspicious, fearful, and even disdainful of others. Ultimately, the ingroup bias causes us to overestimate the abilities and value of our immediate group at the expense of people we don't really know.
Gambler's Fallacy
It's called a fallacy, but it's more a glitch in our thinking. We tend to put a tremendous amount of weight on previous events, believing that they'll somehow influence future outcomes. The classic example is coin-tossing. After flipping heads, say, five consecutive times, our inclination is to predict an increased likelihood that the next coin toss will be tails — that the odds must certainly now be in favor of tails. But in reality, the odds are still 50/50. As statisticians say, the outcomes of different tosses are statistically independent, and the probability of any outcome is still 50%.
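A quick simulation makes the point about statistical independence concrete (a minimal Python sketch, not from the original article): even after a run of five heads, the next toss of a fair coin still comes up tails about half the time.

```python
import random

random.seed(42)  # fixed seed so the run is reproducible

def next_toss_after_streak(streak_len=5, trials=100_000):
    """Toss a fair coin `trials` times; every time the last `streak_len`
    tosses were all heads, record what the very next toss turns out to be."""
    recorded = []
    streak = 0  # current run of consecutive heads
    for _ in range(trials):
        toss = random.choice("HT")
        if streak >= streak_len:   # we just saw 5+ heads in a row...
            recorded.append(toss)  # ...so record this "next" toss
        streak = streak + 1 if toss == "H" else 0
    return recorded.count("T") / len(recorded)

# Prints a value close to 0.5: the coin is never "due" for tails
print(f"P(tails after 5 heads) ≈ {next_toss_after_streak():.3f}")
```

The streak counter resets on every tails, so only tosses immediately following a genuine run of heads are counted — and they still split roughly 50/50.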
Relatedly, there's also the positive expectation bias — which often fuels gambling addictions. It's the sense that our luck has to eventually change and that good fortune is on the way. It also contributes to the "hot hand" misconception. And it's the same feeling we get when we start a new relationship that leads us to believe it will be better than the last one.
Post-Purchase Rationalization
Remember that time you bought something totally unnecessary, faulty, or overly expensive, and then rationalized the purchase to such an extent that you convinced yourself it was a great idea all along? Yeah, that's post-purchase rationalization in action — a kind of built-in mechanism that makes us feel better after we make crappy decisions, especially at the cash register. Also known as Buyer's Stockholm Syndrome, it's a way of subconsciously justifying our purchases — especially expensive ones. Social psychologists say it stems from the principle of commitment: our psychological desire to stay consistent and avoid a state of cognitive dissonance.
Neglecting Probability
Very few of us have a problem getting into a car and going for a drive, but many of us experience great trepidation about stepping inside an airplane and flying at 35,000 feet. Flying, quite obviously, is a wholly unnatural and seemingly hazardous activity. Yet virtually all of us know and acknowledge that the probability of dying in an auto accident is significantly greater than that of getting killed in a plane crash — but our brains refuse to submit to this crystal-clear logic (statistically, we have a 1 in 84 chance of dying in a vehicular accident, compared to a 1 in 5,000 chance of dying in a plane crash [other sources indicate odds as high as 1 in 20,000]). It's the same phenomenon that makes us worry about getting killed in an act of terrorism rather than something far more probable, like falling down the stairs or accidental poisoning.
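Taking the odds quoted above at face value (illustrative figures from the article, not authoritative statistics), a back-of-envelope calculation shows just how lopsided the comparison is:

```python
# Lifetime odds as quoted in the article (illustrative only)
p_car = 1 / 84        # dying in a vehicular accident
p_plane = 1 / 5_000   # dying in a plane crash (low estimate)

ratio = p_car / p_plane
print(f"By these odds, driving is roughly {ratio:.0f}x riskier than flying")

# Using the 1-in-20,000 estimate instead, the gap widens to about 238x
print(f"...or about {(1 / 84) / (1 / 20_000):.0f}x with the higher estimate")
```

Even on the most conservative figure, the activity most of us fear less is dozens of times more dangerous — exactly the gap probability neglect hides from us.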
This is what the legal scholar Cass Sunstein calls probability neglect — our inability to grasp a proper sense of peril and risk — which often leads us to overstate the risks of relatively harmless activities while underrating more dangerous ones.
Observational Selection Bias
This is the effect of suddenly noticing things we didn't notice much before — and wrongly assuming that their frequency has increased. A perfect example is what happens after we buy a new car and inexplicably start to see the same car virtually everywhere. A similar effect happens to pregnant women, who suddenly notice a lot of other pregnant women around them. Or it could be a unique number or song. It's not that these things are appearing more frequently; it's that we've (for whatever reason) selected the item in our mind, and in turn are noticing it more often. Trouble is, most people don't recognize this as a selection bias and actually believe these items or events are happening with increased frequency — which can be a very disconcerting feeling. It's also a cognitive bias that contributes to the feeling that the appearance of certain things or events couldn't possibly be a coincidence (even though it is).
Status-Quo Bias
We humans tend to be apprehensive of change, which often leads us to make choices that guarantee that things remain the same, or change as little as possible. Needless to say, this has ramifications in everything from politics to economics. We like to stick to our routines, political parties, and our favorite meals at restaurants. Part of the perniciousness of this bias is the unwarranted assumption that another choice will be inferior or make things worse. The status-quo bias can be summed up with the saying, "If it ain't broke, don't fix it" — an adage that fuels our conservative tendencies. And in fact, some commentators say this is why the U.S. hasn't been able to enact universal health care, despite the fact that most individuals support the idea of reform.
Negativity Bias
People tend to pay more attention to bad news — and it's not just because we're morbid. Social scientists theorize that it's on account of our selective attention and that, given the choice, we perceive negative news as being more important or profound. We also tend to give more credibility to bad news, perhaps because we're suspicious (or bored) of proclamations to the contrary. In evolutionary terms, heeding bad news may be more adaptive than ignoring good news (e.g. "saber-tooth tigers suck" vs. "this berry tastes good"). Today, we run the risk of dwelling on negativity at the expense of genuinely good news. Steven Pinker, in his book The Better Angels of Our Nature: Why Violence Has Declined, argues that crime, violence, war, and other injustices are steadily declining, yet most people would argue that things are getting worse — which is a perfect example of the negativity bias at work.
Bandwagon Effect
Though we're often unconscious of it, we love to go with the flow of the crowd. When the masses start to pick a winner or a favorite, that's when our individualized brains start to shut down and enter into a kind of "groupthink" or hivemind mentality. But it doesn't have to be a large crowd or the whims of an entire nation; it can include small groups, like a family or even a small group of office co-workers. The bandwagon effect is what often causes behaviors, social norms, and memes to propagate among groups of individuals — regardless of the evidence or motives in support. This is why opinion polls are often maligned, as they can steer the perspectives of individuals accordingly. Much of this bias has to do with our built-in desire to fit in and conform, as famously demonstrated by the Asch Conformity Experiments.
Projection Bias
As individuals trapped inside our own minds 24/7, it's often difficult for us to project outside the bounds of our own consciousness and preferences. We tend to assume that most people think just like us — though there may be no justification for it. This cognitive shortcoming often leads to a related effect known as the false consensus bias, where we tend to believe that people not only think like us, but that they also agree with us. It's a bias where we overestimate how typical and normal we are, and assume that a consensus exists on matters when there may be none. Moreover, it can also create the effect where the members of a radical or fringe group assume that more people on the outside agree with them than is the case. It also explains the exaggerated confidence one has when predicting the winner of an election or sports match.
The Current Moment Bias
We humans have a really hard time imagining ourselves in the future and altering our behaviors and expectations accordingly. Most of us would rather experience pleasure in the current moment, while leaving the pain for later. This is a bias of particular concern to economists (i.e. our tendency to overspend now rather than save for later) and health practitioners. Indeed, a 1998 study showed that, when making food choices for the coming week, 74% of participants chose fruit. But when the food choice was for the current day, 70% chose chocolate.
Anchoring Effect
Also known as the relativity trap, this is the tendency we have to compare and contrast only a limited set of items. It's called the anchoring effect because we tend to fixate on a value or number that in turn gets compared to everything else. The classic example is an item at the store that's on sale; we tend to see (and value) the difference in price, but not the overall price itself. This is why some restaurant menus feature very expensive entrees, while also including more (apparently) reasonably priced ones. It's also why, when given a choice, we tend to pick the middle option — not too expensive, and not too cheap.


