As the Internet becomes a more central part of everyday life, it is increasingly vital that corporations and organizations doing business over the web do their due diligence. In the past, that meant a robust fraud prevention team. Fraud prevention is as important as ever, but it’s no longer the only thing companies have to worry about when it comes to protecting themselves and their customers.
Large organizations like Google, Twitter, and YouTube have all established trust and safety teams over the past decade, but what is trust and safety? Rather than replacing fraud prevention, trust and safety encompasses it along with other threats to a company’s clientele and reputation. For example, a trust and safety team might be responsible for content moderation and maintaining marketplace fairness, in addition to addressing fraud cases.
While different organizations will have different trust and safety needs, having a team in place can be a worthwhile investment in maintaining a loyal user base.
If users don’t trust a platform to keep them safe from bad actors or offensive content, they won’t use that platform. The T&S team’s job is to maintain a safe and fair place for users to communicate and conduct business.
Fraud is one significant concern that an online marketplace might have, but it’s far from the only one. Trust and safety officers can work together with fraud prevention professionals to prevent events like account takeover fraud, chargeback abuse, the sale of counterfeit goods, and social engineering scams, but the trust and safety team also takes its responsibilities one step further.
For instance, say that someone posts offensive or harmful content in an online marketplace. It isn’t fraud and it isn’t illegal, but it does make the marketplace an unpleasant place to spend time, and it damages users’ trust in the administrators’ ability to moderate their site. In this scenario, a trust and safety officer would be responsible for removing the offensive posts and taking action against the offending user if the abuse continues.
By taking steps to ensure that community guidelines are implemented and enforced, marketplaces can show users their commitment to a secure and enjoyable experience. That trust can result in peer recommendations, more users, and a thriving online community.
The trust and safety team has diverse responsibilities that can span a number of roles and employees, depending on the size and type of platform they service. Trust and safety jobs might fall under different titles such as trust and safety specialists, analysts, and leaders, but the job duties can be divided into a few distinct categories.
Also known as a manager or supervisor, the trust and safety team lead is responsible for coordinating the rest of the trust and safety team to ensure that priorities are met in a timely fashion. The team lead might be responsible for overseeing new policy implementations, monitoring product releases, maintaining key trust and safety metrics, and supporting the other team members where needed. The T&S team lead is also the liaison between the department and the rest of the company, including the fraud prevention team if the two are separate.
Operations professionals are like the backstage crew behind every T&S team. These employees are in charge of logistical concerns like budget, tools, vendor contracts, and personnel.
For example, if the content moderators have a problem with a new content-filtering tool, they may turn to the trust and safety operations manager for help. The operations manager would then check the budget, research and approve any new tools, and support the team with the implementation.
Though it may seem like common sense which types of content should and shouldn’t be allowed on an online platform, creating a policy that reflects that can be easier said than done. How do platforms strike a balance between maintaining an equitable environment and not making their users feel stifled? Policy is the groundwork that enables content moderators to act with confidence when removing harmful posts and users.
Community guidelines, for instance, are there to explain to users which types of user-generated content are allowed and which merit suspension or even banning. These rules also serve the purpose of encouraging legitimate users by showing them that the platform takes safe communication between users seriously.
Policy work might also include drafting a privacy policy. Privacy policies can engender trust by being transparent with users about how their data will be used, if at all.
If policy writers are the rule makers, then content moderators are the rule enforcers. It’s the content moderator’s job to monitor user interactions and use a combination of user-generated reports and software to find and delete harmful content as defined by the terms of service. Content moderators may also be responsible for deciding the penalties for users who repeatedly violate community guidelines, such as suspension, banning, or the loss of certain account privileges.
Content moderators are also responsible for removing content that isn’t necessarily harmful but that does put the business at risk, such as copyright infringement or content that isn’t compliant with local regulations.
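To make that moderation workflow concrete, here is a minimal sketch of how user reports and a terms-of-service blocklist might feed a review queue. The `Post` record, the `BLOCKED_TERMS` list, and the report threshold are all hypothetical stand-ins; real platforms typically combine machine-learning classifiers with human review.

```python
from dataclasses import dataclass

# Hypothetical post record; a real platform would pull these from a
# moderation queue fed by user reports and automated classifiers.
@dataclass
class Post:
    post_id: int
    text: str
    report_count: int

# Illustrative blocklist standing in for terms-of-service rules.
BLOCKED_TERMS = {"counterfeit", "scam-link"}

def needs_review(post: Post, report_threshold: int = 3) -> bool:
    """Route a post to a human moderator if users have reported it
    enough times or if it contains a blocked term."""
    if post.report_count >= report_threshold:
        return True
    words = post.text.lower().split()
    return any(term in words for term in BLOCKED_TERMS)

queue = [
    Post(1, "Great deal on handmade mugs!", 0),
    Post(2, "buy counterfeit watches here", 1),
    Post(3, "You are all terrible people", 4),
]

print([p.post_id for p in queue if needs_review(p)])  # [2, 3]
```

In this toy example, the second post is flagged by the blocklist and the third by report volume; both would land in front of a human moderator rather than being deleted automatically.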
Fraud is one of the biggest threats facing online marketplaces. There are multiple reasons for platforms to prioritize fighting fraud: not only do high instances of fraud hurt the business’s reputation and customer base, but fraudulent transactions can also put a business in hot water legally. For instance, if an online marketplace is seen as complicit in the sale of illegal counterfeit goods, it could be fined or even charged.
Fraud prevention also extends to other areas. For example, the fraud prevention team might be responsible for educating and protecting users against scammers. Some marketplaces display automatic warnings at the start of a new message thread, cautioning users against sending large sums of money or giving out personal information to strangers.
Some marketplaces also automatically censor requests for phone numbers and email addresses, ensuring that all contact happens through the trusted platform, where it can be reviewed if necessary. Trust and safety duties can even reach beyond commerce: in 2020, one of the U.S. Census Bureau Trust and Safety team’s main jobs was to fight misinformation about the census online.
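A minimal sketch of that kind of contact-information filtering might use pattern matching, as below. The regular expressions are rough, U.S.-centric illustrations, not production-grade filters, which are far more thorough and locale-aware.

```python
import re

# Rough patterns for U.S.-style phone numbers and email addresses.
PHONE_RE = re.compile(r"\b(?:\+?1[-.\s]?)?\(?\d{3}\)?[-.\s]?\d{3}[-.\s]?\d{4}\b")
EMAIL_RE = re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b")

def redact_contact_info(message: str) -> str:
    """Replace phone numbers and email addresses so that contact stays
    on the platform, where it can be reviewed if necessary."""
    message = PHONE_RE.sub("[phone removed]", message)
    return EMAIL_RE.sub("[email removed]", message)

print(redact_contact_info("Text me at 555-123-4567 or email buyer@example.com"))
# Text me at [phone removed] or email [email removed]
```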
The responsibility for fraud detection can’t rest solely on educated consumers, however. Some instances of fraud, such as location spoofing, can be carried out without the user having any way of knowing. In one example of location-spoofing fraud, a rideshare driver may spoof GPS data to artificially increase the recorded length of a trip, raising the price for the user. That’s one of the reasons it’s necessary to have a fraud team in place to implement detection tools that can resist these schemes.
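One way such a detection tool might work, sketched under assumptions: compare a trip’s reported distance against the straight-line distance between pickup and drop-off, and flag trips that are implausibly long. The 2.5x detour ratio below is an invented threshold; a real system would compare against routed road distance instead.

```python
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def looks_inflated(pickup, dropoff, reported_km, max_detour_ratio=2.5):
    """Flag a trip whose reported distance far exceeds the straight-line
    distance between its endpoints."""
    return reported_km > haversine_km(*pickup, *dropoff) * max_detour_ratio

# An ~8 km crosstown trip reported as 40 km gets flagged for review.
print(looks_inflated((40.7128, -74.0060), (40.7794, -73.9632), reported_km=40.0))  # True
```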
Preventing fraud also includes implementing tools like multi-factor authentication, digital footprint analysis for onboarding, real-time address verification for location-based platforms, and location intelligence to combat repeat offenders using fraud farms.
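As one sketch of what location intelligence can look like in practice, an “impossible travel” check flags activity from two places too far apart to reach in the elapsed time, a common signal for account takeover and fraud farms. The 900 km/h speed cutoff is an assumption chosen for illustration.

```python
from datetime import datetime
from math import radians, sin, cos, asin, sqrt

MAX_PLAUSIBLE_KMH = 900  # assumed cutoff, roughly airliner speed

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two coordinates, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def impossible_travel(prev_login, new_login):
    """Flag two logins that imply travel faster than is physically plausible."""
    (t1, lat1, lon1), (t2, lat2, lon2) = prev_login, new_login
    hours = abs((t2 - t1).total_seconds()) / 3600
    if hours == 0:
        return True  # simultaneous logins from different places
    return haversine_km(lat1, lon1, lat2, lon2) / hours > MAX_PLAUSIBLE_KMH

# A login from New York followed 40 minutes later by one from Singapore.
ny = (datetime(2024, 5, 1, 12, 0), 40.7128, -74.0060)
sg = (datetime(2024, 5, 1, 12, 40), 1.3521, 103.8198)
print(impossible_travel(ny, sg))  # True
```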
No one wants to run an online community that people think of as a lawless digital land where anyone can wreak havoc and commit fraud. Trust and safety teams are the professionals who ensure that the platform’s compliance and general atmosphere match the executive team’s vision for their service.
Investing time and resources in creating a legitimate, respectful community is well worth the payoff in user satisfaction, usage metrics, and a good reputation.