Chat Warnings

As part of the Twitch Safety & Privacy team, I designed the Chat Warning feature to give Twitch moderators a powerful tool for issuing warnings to problematic users, without needing to resort to suspensions or bans.

Our research showed strong demand for a less severe way to address unacceptable behavior: informal approaches, such as verbal or direct-message warnings from moderators, were often ignored. By building warnings into the platform itself, we gave them the authority of an official Twitch message, making them more effective.

Empathize

Understanding the Twitch Moderator Experience

In the Empathize phase of the Chat Warnings project, I focused on understanding the experiences, challenges, and pain points of Twitch moderators. This stage was crucial in grounding the design process in real-world user needs. Here’s how I approached it:

Shadowing a Twitch Moderator:

To gain firsthand insight into the day-to-day responsibilities and challenges faced by moderators, I shadowed a Twitch moderator during several live streams. This experience allowed me to observe how they manage chat interactions in real time, the tools they use, and the common issues they encounter. By immersing myself in their environment, I could see how high-pressure situations, rapid decision-making, and multitasking are essential parts of their role.

Streaming from My Own Twitch Channel:

To experience the moderator interface for myself, I streamed from my own Twitch channel and used the moderator view. It was eye-opening: I found the interface overwhelming and confusing, especially as a new user. This helped me empathize with both novice and experienced moderators, and it highlighted the steep learning curve and the potential for errors under pressure.

Reviewing Existing UXR Studies:

I also reviewed user experience research (UXR) studies that Twitch had previously conducted, focusing on moderators. These studies provided a broader context and validated the observations and insights I gathered during my hands-on activities. The data from these studies helped to identify consistent pain points, such as the complexity of the moderation tools and the cognitive load on moderators during busy streams.

By combining these methods, I was able to build a comprehensive understanding of the moderators' needs, frustrations, and motivations, setting a strong foundation for the next stages of the design process.

Twitch Moderator

Twitch Stream Manager Feature

Define

Problem Statement Development

In the Define phase, I collaborated closely with the Twitch Safety product manager to translate our research findings into a clear and actionable design challenge.

Reviewing Backlog Items with the Product Manager:

To align our efforts with the broader goals of the Twitch platform, I met with the Twitch Safety product manager to review the existing backlog of features and issues related to moderation. This was a critical step to ensure that our project was addressing the most pressing needs of the community while also fitting within the larger strategic framework.

Leveraging Insights from the Empathize Phase:

During our discussions, I drew heavily on the insights I gathered during the Empathize phase. Understanding the daily struggles of moderators, the complexities of the moderation interface, and the frequent pain points allowed me to contribute valuable context to our conversations. This ensured that the decisions we made were grounded in real user needs and not just assumptions.

Defining the Design Challenge:

Through our collaboration, we defined the following key challenge:

"How might we enable moderators to manage chat interactions more efficiently, reducing the frequency of timeouts and bans while preserving the safety and positive atmosphere of the channel?"

This challenge was deliberately framed to address the dual needs of reducing the burden on moderators while ensuring that the community standards were upheld. By focusing on both efficiency and effectiveness, we set the stage for designing a solution that would be both practical for moderators and beneficial for the overall user experience.

Ideate

Exploring Solutions

In the Ideate phase, we focused on generating and refining ideas that could effectively address the design challenge we defined. Among the various ideas in our backlog, one concept particularly resonated with both the product manager and me: it was a direct response to feedback we had heard repeatedly from moderators during qualitative user research studies.

Identifying the Need for a Less Severe Enforcement Tool:

Through interviews and previous UXR studies, we learned that moderators often felt constrained by the binary options of either ignoring minor infractions or issuing a timeout or ban. Many moderators expressed frustration that users did not respond well to verbal or direct message warnings. These informal approaches often failed to convey the seriousness of the situation, leading to repeated misbehavior that eventually escalated to a timeout or ban.

Conceptualizing the Chat Warning Feature:

The product manager and I identified an opportunity to create a tool that would fill this gap—a Chat Warning feature. This feature would allow moderators to issue a warning that appeared directly over the user's chat input box. Because the warning looked authoritative and came directly from Twitch itself, it would make clear to the user that their behavior was under scrutiny and that further infractions could lead to more severe consequences.

Anticipated Impact:

The goal of this feature was to provide moderators with a less severe, yet still effective, tool for enforcing community guidelines. By requiring the user to read and dismiss the warning before continuing to chat, we aimed to increase the likelihood that the user would take the warning seriously and adjust their behavior, thus reducing the need for timeouts and bans.
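The dismissal requirement described above can be expressed as a simple gate on the chat input: sending is blocked while a warning is pending, and acknowledging the warning re-enables chat. The sketch below is illustrative only; the type and function names are hypothetical, not Twitch's actual code.

```typescript
// Illustrative sketch of the acknowledgement gate (hypothetical names,
// not Twitch's implementation).
interface ChatSession {
  // The warning text shown over the chat input, or null if none is pending.
  pendingWarning: string | null;
}

function canSendMessage(session: ChatSession): boolean {
  // The chat input stays blocked while a warning is still displayed.
  return session.pendingWarning === null;
}

function acknowledgeWarning(session: ChatSession): ChatSession {
  // Dismissing the warning clears it and re-enables chat for the user.
  return { ...session, pendingWarning: null };
}
```

Gating the input this way is what distinguishes the warning from an ordinary chat message, which a user can simply scroll past without reading.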

This ideation process helped us move forward with a clear and promising solution, setting the stage for prototyping and testing in the next phases.

Prototype

Designing a Solution

In the Prototype phase, we translated our ideas into tangible design explorations, iterating on wireframes and mockups to create a functional prototype for testing.

Initial Design Direction - Warning During a Timeout:

Our initial concept was to support a feature where moderators could issue a warning even if the user was already timed out. The rationale was that this would allow moderators to provide additional context or information to the timed-out user, potentially preventing them from pestering the moderator with direct messages asking for explanations. However, as we developed and tested this idea, we quickly realized that it introduced significant usability challenges.

Challenges with the Warning and Timeout Combination:

Having both a warning and a timeout active simultaneously proved to be visually confusing and problematic from a user experience perspective. The core issue was that once a user was timed out, there was no reasonable way for them to dismiss the warning. The timeout persisted, creating a disjointed experience where the user could not interact with the warning in a meaningful way. This made the warning feel ineffective and redundant, rather than a helpful tool for moderating behavior.

Refining the Approach - One Action at a Time:

Based on these insights, we pivoted our approach. We decided that moderators should only be able to perform one action at a time—either warn, timeout, or ban a user, but not more than one of these actions simultaneously. This decision simplified the user experience and ensured that each action carried the appropriate weight without overwhelming or confusing the user.
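The "one action at a time" rule can be modeled as a single mutually exclusive enforcement state, so that warn, timeout, and ban can never stack by construction. This is a hypothetical illustration of the constraint, not the actual implementation:

```typescript
// Illustrative sketch of mutually exclusive enforcement actions
// (hypothetical names, not Twitch's implementation).
type EnforcementAction = "warn" | "timeout" | "ban";

interface EnforcementState {
  // At most one enforcement can be active against a user at a time.
  active: EnforcementAction | null;
}

// Returns the new state, or null if the action is rejected because
// another enforcement is already in effect (warning not yet dismissed,
// timeout not yet expired, ban not yet lifted).
function applyAction(
  state: EnforcementState,
  action: EnforcementAction
): EnforcementState | null {
  if (state.active !== null) {
    return null;
  }
  return { active: action };
}
```

Representing the three actions as one nullable field makes the simultaneous warning-plus-timeout situation impossible to reach, which is exactly the confusion the earlier design allowed.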

Iterating on the Design - Size, Font, and Tone:

With the direction more focused, we began iterating on the design of the warning itself. We experimented with various sizes for the warning modal and played with the size and weight of the font used in the message. Our goal was to strike a balance where the warning felt official and conveyed the seriousness of the situation, while also maintaining a tone that suggested the moderator was doing the user a favor—offering a tip to correct behavior before it led to more severe consequences.

This back-and-forth process of refinement was critical in ensuring that the warning was neither too aggressive nor too passive. We wanted the user to clearly understand that the warning was a serious nudge to adjust their behavior, but not so harsh that it alienated them or made them feel unfairly targeted.

Finalizing the Prototype:

After numerous iterations, we settled on a design that we believed achieved the right balance. The final mockup featured a warning modal that was appropriately sized to command attention without being overwhelming, with a font size and weight that emphasized the official nature of the message without being overly intimidating. This prototype was now ready for the next phase—testing with actual users to validate our design assumptions.

By focusing on both the functionality and the user experience of the Chat Warning feature, we created a prototype that we were confident would meet the needs of both moderators and users in a way that was effective, clear, and empathetic.

Chat Warning Wireframe V1

Chat Warning Wireframe V2

Chat Warning Wireframe V3

Chat Warning Wireframe V4

Test

Preparing for Launch

In the Test phase, we validated our design by gathering feedback from real users—Twitch moderators—to ensure that the Chat Warning feature met their needs and worked as intended.

Gathering Initial Feedback from Moderators:

Before the full rollout, we conducted a series of feedback sessions with selected Twitch moderators to introduce them to the new Chat Warning feature. We walked them through the functionality, explained the rationale behind the design decisions, and observed how they interacted with the feature. These sessions were invaluable for identifying any potential issues or areas of confusion that we might have overlooked.

The feedback from these initial sessions was overwhelmingly positive. Moderators appreciated having a tool that allowed them to enforce chat rules in a way that was less severe than a timeout or ban. They also valued the official appearance of the warning, which helped communicate the seriousness of the infraction to users without needing to resort to more punitive measures.

Rolling Out the Feature:

After making minor adjustments based on the feedback from the initial sessions, we proceeded with a full rollout of the Chat Warning feature. The feature was quickly adopted by the Twitch moderator community, with many moderators reporting that it filled a long-standing gap in their toolkit.

Positive Response and Adoption:

Moderators found the Chat Warning feature to be an effective way to manage minor infractions, and it helped reduce the number of timeouts and bans they needed to issue. They praised its ease of use, its clarity, and the way it empowered them to maintain a positive and safe atmosphere in their channels without being overly harsh.

The successful adoption of the Chat Warning feature validated our design approach and demonstrated the value of listening closely to the specific needs of end users. It also reinforced the importance of iterative, user-centered design thinking in creating tools that resonate with the community they are intended to serve.

Chat Warning Modal Breakdown

Chat Warning Over Chat on Web

Chat Warning Over Chat on Mobile App