The internet has introduced dozens of new ways for people to rely on and connect with one another. However, it has also introduced new ways for people to communicate anonymously and even lie about their identities. When the anonymous nature of the internet extends into real life through ride-share apps, peer-to-peer marketplaces, or dating apps, a very real risk presents itself. So, how do online platforms take responsibility for the safety of their users and protect their brand reputation?
The role of trust and safety teams is to protect community participants, and the reputation of the marketplace itself, from harm. The purpose is two-fold: marketplaces don't want their customers to come to any harm while using their platforms, and it's bad for business if users don't trust a marketplace enough to join or use it.
So what kinds of harm do trust and safety professionals safeguard against? In a peer-to-peer marketplace, they might focus on ensuring that scammers don't put up fake posts to cheat people out of their money. On a dating app, it might be making sure that people are who they say they are, and that users can report those who engage in inappropriate behavior.
Regardless of the type of platform, the trust and safety team’s job is to ensure that the community is a fair and secure place for people to interact or do business. Additionally, the T&S team is responsible for engendering user trust in the platform, meaning that users trust the platform not to abuse their data or violate their privacy.
Here are a few different things a trust and safety team can do to maintain a safe and user-friendly experience on their platform.
Trust and safety professionals may undertake a range of different responsibilities and procedures to moderate online communities.
Though many users blindly scroll through them before clicking "I Accept," terms and conditions can be a vital part of keeping a marketplace or app safe from abuse. When a user repeatedly violates the community guidelines, the administrators can make the decision to suspend them, ban their account, or remove their access to certain privileges.
Requiring users to accept terms & conditions lays the groundwork for community expectations and allows administrators to take action against bad actors without seeming too arbitrary or quick to block users.
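The escalation path described above, from warning to suspension to ban, can be sketched as a simple strike counter. This is an illustrative sketch only: the thresholds, action names, and `UserRecord` structure are assumptions for the example, not a standard; real platforms tune these per policy and add human review.

```python
from dataclasses import dataclass

# Illustrative escalation ladder; real thresholds and actions vary by platform.
ACTION_THRESHOLDS = [(1, "warn"), (3, "suspend"), (5, "ban")]

@dataclass
class UserRecord:
    user_id: str
    violations: int = 0

def record_violation(user: UserRecord) -> str:
    """Log one guideline violation and return the enforcement action to take."""
    user.violations += 1
    action = "none"
    for threshold, name in ACTION_THRESHOLDS:
        if user.violations >= threshold:
            action = name  # keep the most severe threshold reached
    return action
```

Because the user accepted the terms up front, each action taken here maps back to a rule they agreed to, which is what keeps enforcement from looking arbitrary.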
A burner account is a fraudster’s best friend. Easy sign-ups and onboarding are good for UX, but unfortunately, they also enable people with bad intentions.
That doesn't mean that balancing a smooth user experience, low friction, and low fraud risk is impossible; it just requires better authentication methods. Low friction and low fraud risk are complementary goals: the easier an authentication method is for the user, the more likely they'll use it consistently.
Tools like digital footprint analysis and real-time address validation can help verify that users are who and where they say they are, without introducing extra effort on the user’s end.
Account takeovers (ATOs) can be severely damaging, both for the individual users affected and for the company's reputation as a whole. What's worse, once one account is compromised, a hacker can use the information inside that account to scam other users. Stronger authentication methods, such as multi-factor authentication and spoof-resistant location intelligence, can help keep cybercriminals out of legitimate user accounts.
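One common second factor in multi-factor authentication is a time-based one-time code. The sketch below is a simplified version of the TOTP scheme (RFC 6238) using only Python's standard library; the shared secret handling and code delivery are assumptions for the example, and a production system would use a vetted library rather than rolling its own.

```python
import hashlib
import hmac
import time

def generate_otp(secret: bytes, interval: int = 30) -> str:
    """Derive a 6-digit time-based one-time code (simplified TOTP, RFC 6238)."""
    counter = int(time.time() // interval).to_bytes(8, "big")
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                      # dynamic truncation
    code = int.from_bytes(digest[offset:offset + 4], "big") & 0x7FFFFFFF
    return f"{code % 1_000_000:06d}"

def verify_otp(secret: bytes, submitted: str) -> bool:
    """Compare the submitted code against the current expected code."""
    return hmac.compare_digest(generate_otp(secret), submitted)
```

Because the code changes every 30 seconds and is derived from a secret the attacker doesn't hold, a stolen password alone is no longer enough to take over the account.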
Part of trust and safety is about maintaining a community environment that users want to be active participants in. When inappropriate or even disturbing user-generated content is allowed on a platform with no consequences, it can damage the community’s reputation and encourage more abusive users to join. Trust and safety teams need tools in place to help monitor and remove offensive content, as well as to address harassment between individual users.
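A minimal version of such a monitoring tool is a screen that routes content to human review when it matches a blocklist or accumulates enough user reports. The phrases, threshold, and function name below are all assumptions for this sketch; production systems combine ML classifiers, reputation signals, and moderator queues.

```python
# Illustrative blocklist and threshold; real moderation pipelines use ML
# classifiers and human review, not a hardcoded phrase list.
BLOCKED_PHRASES = ["wire me money", "click this link to claim"]
REPORT_THRESHOLD = 3  # assumed value for this sketch

def needs_review(text: str, report_count: int) -> bool:
    """Queue content for a human moderator if it matches a blocked phrase
    or has accumulated enough user reports."""
    lowered = text.lower()
    matched = any(phrase in lowered for phrase in BLOCKED_PHRASES)
    return matched or report_count >= REPORT_THRESHOLD
```

Even a crude filter like this shows the design principle: automation surfaces candidates quickly, while the final suspend-or-ban call stays with the trust and safety team.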
Trust and safety professionals may also be responsible for reviewing user reports, making decisions to suspend or ban certain user accounts, reporting illegal content or transactions to law enforcement, and opening fraud investigations where necessary.
So, what is the difference between trust & safety and fraud prevention? Some may think that T&S is just a clever rebranding, but there are important differences between the two fields.
While fraud detection and prevention is part of a trust and safety team’s purview, it’s not the only way they maintain the integrity of their marketplace. On the other hand, while there is overlap with some of the methods T&S uses to keep users safe, fraud prevention teams will be much more focused on specifically protecting the community from fraud, and they likely won’t spend time on things like community guideline drafting or content moderation.
T&S and fraud prevention may share team members, be separate teams, or be part of the same department. Processes like fraud investigations, handling authentication, and monitoring users for signs of misuse are some responsibilities that both teams may share.
No one wants to use a service that they don't trust to do right by them and other users. Digital trust is the "trust" half of the T&S equation. It means building a secure digital environment where users can transact and communicate without fear of harassment, mistreatment, fraud, harm, or privacy violations.
It’s vital that companies pay attention to their digital trust and safety profile, as it can directly affect how many users join and stay on their platform.
When users transact with each other in an online community, they aren't just putting their trust in one another. They're also trusting the platform to protect them from fraud and abuse. That's a big responsibility, and one that platforms have to take seriously if they want to survive and stay competitive.
Establishing trust and safety is a positive cycle: T&S employees moderate content and ban users who post abusive messages, other users see that the platform is an enjoyable place to spend time and transact, more users join, and so on. Neglecting trust and safety is a negative cycle. When users see that the people running the marketplace don’t care to maintain it, they leave, and bad actors multiply to take their place.
By enforcing community guidelines, using spoof-resistant authentication, and verifying user identities, trust and safety teams can protect users, their platforms, and their reputation from misuse.