5.2.1: Internet, Social Media, and the Age of Algorithms
In the 21st century, most people get health information from internet sources. When faced with a new symptom or health decision, the first action is often a simple internet search. Which websites top the results list may be determined by advertising dollars, or by search engine algorithms that tailor results to the person’s past search history. A search, or viewing a website or social media post, can also trigger further direct-to-consumer advertising from medical and pharmaceutical companies. These algorithms and targeted ads may affect our health decisions far more than we realize (Stark & Fins, 2013).
Algorithms also shape our social media experience. Although a user may initially choose whom to follow, social media platforms typically recommend similar pages based on that individual’s usage data. User-generated content can be quickly shared with a broad audience, often without identification of the original source or any verification of the information, and social media platforms are often used to spread an ideology or sell products. Take, for example, the anti-vaccine sentiment after the COVID-19 vaccine rollout: it was discovered that a majority of the anti-vaccine misinformation posts on Facebook and Twitter were created by the same 12 individuals, many of whom had been sharing health disinformation for years (Bardosh et al., 2022). When these posts garner as much attention in the media as posts from reputable sources, if not more, misinformation spreads farther and faster than accurate information. Like trying to put out a million small fires at once, it is difficult to combat this type of misinformation with well-researched rebuttals in a timely fashion. As a result, it is left to the viewer to do the due diligence of validating the information presented to them, a task lengthier than most people are willing to take on.
As humans, we already struggle with a variety of cognitive biases that cloud our decision-making. Availability bias (the more often we see something, the truer it seems), confirmation bias (searching for evidence that supports what we already believe), and overconfidence bias (feeling we know more than we do) are all exacerbated by social media and search engine algorithms (Stark & Fins, 2013). Our “engagement” with certain posts is tracked, and we are subsequently fed similar posts, cementing our own personal “reality bubble.” With the advent of readily available artificial intelligence, access to information is potentially limitless, but the quality of that information remains malleable by both commercial interests and machine learning algorithms.