Utilization of content warnings on Chief Delphi

We’ve seen some sensitive topics posted on CD lately. Both well-meaning posters and well-meaning moderators have added content warnings to many of these posts.

However, the research shows that content warnings, or trigger warnings, likely don’t work, and may cause more harm than good. Some of the reasons the warnings aren’t a great system:

  • For many people, just seeing the triggering topic mentioned is enough to trigger a negative response
  • According to at least one recent study, only 6% of people stop reading when they see such a warning
  • Such warnings create an “anticipatory affect” where people are expecting the worst (as one article put it, “before you see it, it’s a lot scarier than when you actually see it”)

Sources:

  • Trigger warnings don't help — and could actually cause distress, studies suggest | National Post
  • What if Trigger Warnings Don’t Work? | The New Yorker
  • The latest study on trigger warnings finally convinced me they’re not worth it.
  • Typology of content warnings and trigger warnings: Systematic review - PMC

So, here’s why I’m posting this in the CD Forum Support category: what can we do as a community to be better about sensitive content? One thought I had is to add a category or tag for content with certain common triggers, and allow folks to opt out of seeing posts with that category/tag (ideally it’s a tag, but I don’t know if the functionality exists to block stuff based on tag). I’m sure there are other ideas out there, and I’d be interested in hearing them.

An additional, slightly less important factor is that adding trigger warnings as they exist now (e.g. Content Warning: sexual abuse) requires either the poster or a moderator to characterize the content. We already saw a disagreement over the verbiage on abuse/harassment/assault. The distinction is not trivial, and I don’t want anyone to be in the position of having to characterize content like this, especially when they may not be well versed in the nuances or have the full story.

4 Likes

Users can mute tags; it’s under each user’s preferences.

If there’s a standard tag set for things that could trigger people, users can mute those standard tags in their preferences, and posters (and mods) can apply the tags to posts.
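Conceptually, the filtering is just set intersection between a topic’s tags and a user’s muted tags. Here’s a minimal sketch of the idea, with made-up tag names and data shapes (this is not Discourse’s actual internals; Discourse handles the filtering natively once a user mutes tags in their preferences):

```python
# Hypothetical standard CW tag set; the real names would need community agreement.
STANDARD_CW_TAGS = {"cw-abuse", "cw-violence", "cw-self-harm"}

def visible_topics(topics, muted_tags):
    """Drop any topic carrying at least one tag the user has muted."""
    return [t for t in topics if not (set(t["tags"]) & muted_tags)]

topics = [
    {"title": "Open Letter discussion", "tags": ["cw-abuse"]},
    {"title": "Swerve drive tuning", "tags": ["programming"]},
]

# A user who muted the whole standard set sees only the second topic.
print(visible_topics(topics, STANDARD_CW_TAGS))
```

The hard part isn’t the mechanism; it’s agreeing on the standard tag set in the first place.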

5 Likes

What if CD added an extra step between clicking on a post and reading it for sensitive topics such as the ones with CWs, like an “Are you sure you wish to proceed? This post has been flagged with blah blah blah…” prompt? Or does that not help because it functions pretty much the same as just having a CW?

Idk, just thinking of what happens on Instagram and Reddit when something is flagged as misinformation or NSFW (not saying either of those things belongs on CD, just an example of another forum-like platform with this feature).

2 Likes

I’m gonna be honest: I don’t think your post was made in good faith. It uses an extremely narrow interpretation of what content warnings are supposed to accomplish in order to argue that they are worthless. Content warnings are not instant trauma-trigger prevention tools; they are not intended to be a person’s entire strategy for dealing with trauma. They are one tool of many. All they do is describe topics that will come up if you click the link in the thread.

“For many people”? How many is this? How many people with trauma does this statement not apply to? This is frustratingly vague.

Users who are so sensitive to a traumatic topic that they can’t even read the word “abuse” or whatever 1. are few and far between, 2. typically use tools to filter such words and links out of their web browsing, and 3. are more in need of a warning than anyone else, if these topics are to be included on CD.

The intent of trigger warnings is not to stop most people from reading things, it’s to prepare people to read things. They are an accessibility tool. Many people will be able to handle sensitive topics much better if they are given a quick heads up that they are coming. In much the same way that a jump-scare is more terrifying than seeing a scary thing coming from a mile away, a post with sensitive content is easier to handle when the reader knows to expect it vs. being blindsided by it.

Measuring the number of readers who choose to skip the thread altogether is a poor metric. In any case, 1 in 20 readers is still a substantial number of people who were able to make a choice they otherwise would not have been able to make.

Yeah, so that’s the point? To anticipate it. You can then decide if it’s worth the risk of being as bad as it might be.


Let’s put it another way: what if instead of “CW” the warnings just said “heads up!”, or “contains”? Would this have upset you this much? What would be the objection then?

27 Likes

Just gonna throw out there that I had no idea until 48 hours ago what “CW” meant, and happily clicked/read onwards on several posts with no idea what was coming (it took a couple of posts to put 2 and 2 together). I’d imagine I’m not the only person in that boat.

Perhaps not using an acronym for something like this may be a good idea? (i.e., type out the two words instead of abbreviating)

10 Likes

I’m sorry you feel this way. It certainly was good faith.

I’ve heard from several people who have experienced trauma that these warnings have hurt them in the past, and I seem to always be stumbling across yet another article about how the warnings are more harmful than they are helpful. I’m not sure what the bad faith would even be here?

I totally see the perceived value. It’s one of those things I think sounds really good in theory, but it’s a relatively new phenomenon (especially on internet sites) and we still need to figure out the best way to handle this.

I also don’t see any evidence that they work, and I’d welcome counterpoints to the sources I provided. I admit I’m no expert here and haven’t done exhaustive research; all I know is what I’ve heard and read, so I’d love to see contrary evidence if it’s out there.

The articles I’ve cited offer similar characterizations, and unfortunately don’t provide hard numbers. I haven’t conducted my own research on this; I’m sharing what I’ve found from those who have.

I think you’re missing the boat on this one. I’ll quote some additional material from one of the articles, as it provides better context than I could:

Victoria Bridgeland, another study author and a PhD candidate at Flinders University in Adelaide, Australia, said trigger warnings can actually enhance the attractiveness of something, called the “forbidden fruit effect.”

“If you warn someone about something, it can make it more desirable,” Bridgeland said.

What they do do, however, is increase the “anticipatory affect.” In other words, those who see a trigger warning may experience negative emotions because of the warning, but not necessarily because of the warned-against content. That said, the analysis suggests this feeling — a “bracing effect” — is “apparently completely ineffective.”

Bridgeland said it can be linked to “fear of the unknown or fear of getting told something scary coming up.

“And before you see it, it’s a lot scarier than when you actually see it,” Bridgeland explained.

I don’t think this would change anything. The phrase “trigger warning” or “content warning” isn’t the problem; it’s all the other stuff I cited above. If it says “Contains: sexual harassment”, surely you can understand why that doesn’t address a single point I made against the phrase “Content warning: sexual harassment”.


I’m sorry that my post was so off-putting to you. It comes as a surprise, as I didn’t think this would be a controversial topic. As you’ve said, there are many tools in the toolbox. I guess all I’m suggesting is maybe we consider retiring the old tool, and replacing it with a new one – or supplementing our existing tools with one more way to opt out.

I don’t think trigger warnings need to be abandoned. I’m suggesting we help some additional people avoid the trauma entirely by allowing them a way to mute content with such labels. Other social media, search engines, etc. have ways to turn off certain types of content. Why can’t we?

5 Likes

Perhaps, unless shown otherwise, we could just assume that everyone is posting in good faith. This is a difficult topic and reasonable people can disagree (and it would be nice if we could do so respectfully).

To the topic at hand: maybe using tags would be better than editing subject lines, if that would make filtering easier and/or more effective for those it would help?

10 Likes

May be a bit out there, but would it be possible to add a “Content Warning” category? It might make moderation easier as well.
Perhaps muting it by default would also be a good idea, but I don’t really know how this works, at least on the technical side.

1 Like

The content warnings that have been used over the past week have been incredibly helpful to me, so I found it off-putting to see a thread titled “Content warnings don’t work”. From your first post, 6% of people clicking off after seeing a content warning actually seems quite high, considering some people use the warnings to prepare before reading, or to come back later when they’re in a better headspace. And the “anticipatory affect” you mention as a negative is what ensures I don’t get essentially jumpscared with graphic detail, which sometimes gives me flashbacks.

16 Likes

Super appreciate that feedback. If @moderators want to change the title, perhaps “Utilization of content warnings on Chief Delphi” would be a better title. I’m sorry the title was off-putting, and hope the content was less so.

4 Likes

I also wonder (genuinely) if this 6% is 6% of people who might have a particularly strong aversion to the content (not everyone is affected by content the same way), or 6% of the general population.

Thread title changed per request of OP

8 Likes

FWIW I consider myself a reasonably online person and had no idea what “cw” meant. I think it would be more accessible to spell out “content warning” or “warning”. Since I’d already read the Open Letter thread I knew it addressed those topics, but I didn’t know what “cw” was supposed to tell me. I guess you figure it out eventually. And as a minor nit, probably standardize on capitalization while the mod team is at it.

EDIT: One other thing (that may be specific to the internet generation I belong to) is that I would have known TW stands for trigger warning. I did some additional googling, and there doesn’t seem to be strong internet agreement on what a CW is vs a TW; some sources indicate a TW is more severe than a CW, but others seem to treat them interchangeably.

4 Likes

I think the issue I’ve been taking here is based on this: every one of these content warnings was requested by multiple users in reports on each thread where they’ve been added. I’m sure you want to help, that you’re curious about good solutions, and that you aren’t sure what the best approach might be. In general, from a place of ignorance I would defer to the requests and advocacy of trauma survivors: what they actually say and want. This isn’t an academic debate for them, it’s lived experience. These are tools they find helpful, which is why users have been asking for them here.

The relative hostility I came in with was because you started your thread with an absolute, “content warnings don’t work”, and cited some papers arguing from shaky premises about what content warnings are and what they’re supposed to do. But the people who ask for them value them. Survivors have said these warnings help them. Isn’t that enough? Doesn’t that carry more weight than whatever some NY Times thinkpiece about college lectures says?

As an aside: I have no objection personally to also using the tag system to indicate this sort of content; thread titles and tags don’t exclude one another. If someone has a Discourse-specific plug-in or feature they want to bring up to help with this, that’s also up for discussion.

11 Likes

I suspect “Content Warning” developed as an alternative to “Trigger Warning” during the 2015–2016 period when “trigger” became a funny meme word. Thankfully, most people have since either matured and realized it’s not something to trivialize, or simply decided the joke wore out.

6 Likes

I agree. I have flashbacks just from the counseling that I’ve done for too many friends and family members on these issues. The warnings let me prepare for reading something I know will make me cry. I’m still going to read it both to see if there is anything I can say that would help and because I want to be better prepared to protect my students from current threats.

I know Jared, and he certainly didn’t post in bad faith. That said, it’s possible to assemble an opposing argument on nearly any topic that, taken in isolation, can lead even well-meaning people to compose such a post. The answer is to set aside the theoretical concerns and listen to those whom such warnings have helped. I suspect that those who react negatively to warnings would react the same to the actual content, and they would benefit more from therapy than from removing the warnings.

1 Like

To be clear, I don’t think we need to remove the warnings. Anecdotally, at least, some people are getting benefit from them, which is great. I think the idea of adding tags and letting people mute those tags is an additive strategy, and I’d love to see some standardization in that space.

5 Likes

I feel that people should be able to apply filters to whatever system generates hits for forum topics. People have brought up things like a “continue?” screen or opting out of tags, but a reasonable way to manage this without sensitive words provoking that initial “what could be so bad about this?” curiosity might be segmenting posts out by keywords. People have been pretty good about titling their posts with content or trigger warnings, so would an automated process work that partitions potentially sensitive topics into their own category, blocked by default but enabled by a user setting? Maybe entirely remove them from being seen unless a user checks a toggle saying they’re okay with seeing sensitive/political topics, and then introduce them into that user’s feed just like any other topic if they wish.

As long as people know it’s an option they have access to at all times, it should let people who do want to delve into those topics do so with no extra steps (after the setting is applied), while those who don’t want to see them can leave the setting off and never even see evidence of the topics’ existence. Maybe include it as a step in account creation: a mandatory question of “Would you like to opt out of potentially sensitive topics?” with a “yes, opt me out” or a “no, don’t opt me out” selection field, plus a reminder that the setting can be changed at any time. And if it ships as a feature, show a pop-up or notification that forces the user to choose their filtering preference the first time they log in after the update.
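To make the idea concrete, here’s a rough sketch of how the keyword partitioning and the opt-in toggle could fit together. Everything here is hypothetical (the marker list, category names, and user flag are made up for illustration); it leans on the observation above that posters have been consistent about putting warnings in titles:

```python
# Hypothetical marker list; a real deployment would want a vetted, agreed-on set.
SENSITIVE_MARKERS = ("content warning", "trigger warning", "cw:", "tw:")

def categorize(title: str) -> str:
    """Route topics whose titles carry a warning marker into a separate category."""
    lowered = title.lower()
    if any(marker in lowered for marker in SENSITIVE_MARKERS):
        return "sensitive"  # hidden from feeds by default
    return "general"

def feed_for(opted_in: bool, topics: list) -> list:
    """Hide the sensitive category entirely unless the user flipped the toggle."""
    return [t for t in topics if categorize(t["title"]) != "sensitive" or opted_in]
```

With `opted_in=False` (the proposed default, or the answer to the sign-up question), sensitive topics never surface; flipping the setting makes them appear in the feed like any other topic.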

This topic was automatically closed 365 days after the last reply. New replies are no longer allowed.