Use of Social Media for Dissemination of Public Health Information: A Double-Edged Sword

Natasha Matta
Dec 23, 2022

People are increasingly turning to social media as their source of news. Nearly half of adults in the United States say they get news from social media “often” or “sometimes.” People may first hear about emerging health concerns, like long COVID or monkeypox, through social media not managed by government agencies or public health organizations. Additionally, more and more people are deliberately seeking health information from social media. About one in ten users surveyed from an online patient community reported turning to social media to look for reliable health information, evaluate new treatment options, and learn about medication side effects. As social media takes on a more central role in how people receive news and seek out health information, it is important to consider the risks and benefits of disseminating public health information via social media.

Social media presents several benefits for health communication. First, posting and engaging with others’ content is quick and easy, and updates can be made in real time. Users can hear about public health issues like disease outbreaks as soon as they become a concern and can ask and answer questions through comments and replies. People also do not have to actively look for health information; they are exposed to it passively on platforms they use primarily for entertainment. Second, social media lets users reach an enormous audience: Facebook alone has 2.93 billion users. Third, social media makes health information accessible. Many newspapers and journals require subscriptions, while social media is free, and journal articles are often not written for general audiences, so they can be difficult for lay readers to parse. It is much easier to read an infographic on Instagram, for instance, than to dissect a full-length paper published in a scientific journal. Finally, social media can reach younger audiences like Generation Z. Although young people are affected by public health challenges like COVID-19, they may not have the same access to information about these issues as adults do. These topics may not be discussed by their parents or in settings like school, so information from social media can help fill those gaps.

However, the growth of public health information on social media comes with drawbacks, chief among them the potential spread of misinformation. People do not always vet the information they see online or the sources it comes from. It sounds simple, but a high number of likes does not automatically make a post a good source of information. For instance, posts from celebrities and influencers often garner many likes and comments even though they have no medical expertise to back up their claims. Users may also unintentionally spread misinformation further because they are uninformed, as opposed to disinformation, which is false information deliberately spread to mislead others.

[Image: example of accurate information. Credit: PAHO]

[Image: example of misinformation. Credit: Science News]

While posts with accurate information and credible sources have the potential to quickly reach a large number of users on social media, so do posts that contain misinformation and unreliable sources. Myths like monkeypox being an STI and hydroxychloroquine being a viable treatment for COVID-19 can be easily perpetuated by social media and negatively influence the decisions people make about their health. Anyone can make a colorful, eye-catching infographic and post it on social media, but that does not make the information it contains true. Additionally, if people already hold false beliefs, it is easy to find other users with the same opinion, creating an echo chamber.

This raises the question of whose responsibility it is to regulate what information is posted and spread. Is it social media platforms? The government? Users? Some platforms have taken steps to combat misinformation, especially around the pandemic and elections. Some claim this moderation violates the constitutional right to free speech, but the First Amendment restricts government censorship, not content moderation by private companies. YouTube banned anti-vaccine misinformation in videos, and Facebook claimed to have removed over 18,000 pieces of COVID-19 and vaccine misinformation. Even so, vaccine misinformation remains rampant on platforms like Facebook that have taken action against it. Misinformation about health issues other than vaccines also harms users but is not currently being addressed by platforms, given their focus on COVID-19.

Another concern is platforms relaxing their restrictions on misinformation. Last month, Elon Musk rolled back Twitter’s COVID-19 misinformation policy. A study at MIT found that false news spreads significantly faster on Twitter than true news, in part because people retweet novel false claims more often than true ones. With bans on COVID-19 misinformation lifted, false news has the potential to spread even faster and reach even more users.

Instagram and Facebook have introduced a COVID-19 Information Center with information from public health authorities, and an informational label or pop-up linking to the CDC and WHO appears on any post that mentions COVID-19. However, social media platforms can and should do more to keep their users safe and informed. For instance, they could give experts a badge or verification on their profiles, based on their actual qualifications rather than their follower counts, so users know they are qualified to speak about a public health topic like COVID-19. Platforms could also hold posts about COVID-19 until they are reviewed for scientific accuracy, instead of letting posts go live and only later searching for claims that violate their guidelines. Social media companies could also employ more designated moderators, so the responsibility for reporting misinformation does not fall on users.

Social media is not going anywhere, at least not anytime soon. We need to start answering these questions about social media platforms’ responsibility in the fight against misinformation and put in place better safeguards against its spread, not just for COVID-19 but for other public health issues too.

While some responsibility naturally falls on users to vet the information they see online and determine whether its sources are reputable, it is not always reasonable to expect this. The minimum age to create an account on most social media platforms is 13. Is it fair or reasonable to ask a 13-year-old to vet information that could affect their health long-term?

As it stands, it is up to us, as social media users, to vet the information we see in posts and stories. Don’t take what people say online at face value. Double-check the health-related information you come across with reputable public health organizations like the CDC and WHO. Don’t reshare posts without making sure they are credible and accurate first.

Public health organizations should devote more resources to making infographics and social media posts to leverage the benefits of social media and make scientifically accurate information more accessible. In addition, social media platforms should implement stricter regulations to combat misinformation.

Written for the Student Vaccine Working Group on Ethics and Policy.

Natasha Matta is a student at the University of Michigan interested in health equity and social justice.