On Friday, Sara Haider, a product-management director at Twitter, asked for feedback on some new features the company is considering. “Hey Twitter. We’ve been playing with some rough features to make it feel more conversational here,” she tweeted, sharing images of reply threading and an online-status indicator. “Still early and iterating on these ideas. Thoughts?” she asked.

While some users replied with small tweaks or suggestions (“more whitespace”), others begged Twitter to fix the one thing they feel the company continues to ignore: rampant harassment and abuse. “Talk to @jack about actually doing something instead [of] cosmetic changes,” one woman tweeted. “I don’t think Twitter shouldn’t evolve. I just think Twitter HAS to think of how features intended to help may be exploited for harm,” another user said.

Haider herself isn’t part of the team at Twitter that oversees issues related to abuse and harassment, but criticism of how the company handles such issues has reached a fever pitch. Over the past year, a slew of high-profile users, including Ed Sheeran, Millie Bobby Brown, and Wil Wheaton, have all stepped back from Twitter because of the harassment they received on the platform. Meanwhile, the company continues to announce incremental product updates that users feel ignore the real problem.

Over the past 18 months, Twitter has changed its user avatars from square-shaped to circular, redesigned Moments, added topic tags to the Explore page, spammed users’ timelines with a new “happening now” section, added endless notifications, upped the character limit to 280, promoted live video of sports events, revamped its algorithm to give older tweets more prominence, and announced plans to revoke the third-party API access that many popular apps rely on. None of these updates change the fact that outsize harassment problems have made the experience of Twitter itself miserable for many users.

While the company continues to dedicate time and resources to making minor changes aimed at boosting engagement, easy fixes for harassment are ignored. In 2016, for instance, Randi Lee Harper, the founder of the Online Abuse Prevention Initiative, laid out some options for improvement in a Medium post. Twitter has since addressed most of them, but several proposals, like auto-muting replies when someone you have blocked tweets at you or giving users with locked accounts greater ability to interact with open ones, have yet to be implemented.

There is also currently no way to opt out of inclusion in Twitter Moments. Moments was introduced in 2015 as Twitter’s way to highlight noteworthy tweets about current events, and since then it has become a widely used feature within the app. But users whose tweets are featured often immediately become targets of abuse, and, because Twitter’s curation team does not ask permission or give a heads-up before featuring a tweet, are seldom prepared for the increased attention.

Other updates, like giving users the ability to revoke DM privileges without blocking someone, or allowing users to mute words and phrases from columns on TweetDeck, could help power users limit the harassment they’re exposed to. Some have become so desperate for a solution that they’ve endorsed radical proposals like allowing only verified accounts to tweet.

Twitter continues to emphasize that tackling abuse is a work in progress. In March, the company even solicited suggestions directly from users via a Google form. More recently, Twitter has rolled out enhanced quality filters for notifications. It’s also started giving more weight to abuse reports made by bystanders, thereby relieving victims of the burden, and temporarily restricted more accounts that demonstrate abusive behavior. When accounts are suspended for abuse, the company now tells offenders which tweets violated the platform’s rules and which rules they violated.

But despite these updates, for many users, harassment remains an inescapable part of the Twitter experience. Unless the company devotes substantial resources to tackling the problem, it’s unlikely it will be contained. Jack Dorsey, Twitter’s CEO, will likely face questions from Congress about the particular urgency of this issue as he testifies Wednesday at hearings about foreign governments’ ability to spread misinformation on the platform. As trolls and bad actors have weaponized social-media sites like Twitter, abuse and harassment campaigns can become mechanisms for spreading misinformation and divisive political content.

Some users, though, are skeptical that anything will ever change. “The annoying thing is that every few months Jack comes out with a big speech about how they’re going to fix twitter,” one user tweeted, “and ever[y] time they just continue to get it wrong.”