As the saying goes, even the best laid plans often go awry. Developers can set out with the goal of creating a safe and digitally trustworthy app, but without concrete steps to enforce that vision, it isn't likely to come true. Likewise, app administrators need plans and procedures for emergencies like a data breach, an account takeover, or a spike in fraud and abuse.
The same way that in-person businesses need a safety plan for fires and natural disasters, mobile apps need a procedure in case something goes wrong in the digital world. How do they decide what constitutes serious abuse of the platform? What’s the best course of action after detecting multiple fraudulent accounts?
Having a touchstone to refer back to in times of tumult and uncertainty can ensure that trust and safety teams react in the fairest way possible. It also assures users and employees alike that the company takes their safety seriously and is committed to protecting them from bad actors and events. So, how do mobile apps create this touchstone?
A company safety policy is a document stating a company’s commitment to the safety of its employees and clients as well as an outline of what action to take in certain situations. Safety policies might also include rules about what sorts of conduct are prohibited to encourage a safe working environment.
In other words, the purpose of a safety policy statement is twofold: to let employees and customers know that their safety is being taken seriously, and to outline procedures that help facilitate that safety.
In mobile apps, a safety policy might also be referred to as a trust and safety policy. Trust and safety teams are responsible for drafting policies, moderating users and content, and preventing fraud in online spaces. They may have a general trust and safety policy available for users to read as well as different, more specific policies for community conduct and privacy.
Creating a trust and safety policy is a massive undertaking for a mobile app or platform. The policy has to cover multiple areas of safety and use, but it must also be relevant to the app’s niche. For example, a ride-sharing app will need a different set of rules and guidelines than a social media app.
Depending on its needs, size, and age, an app may hire an outside agency to craft a policy from a model, or it may hire a dedicated policy writer as part of a broader trust and safety team. The ultimate goal of a safety policy is to both build digital trust and give users and enforcers something to reference for acceptable and unacceptable uses of the platform.
Though the needs of different industries and apps will vary widely, there are a few key points that most trust and safety policies should hold in common.
When people communicate and do business with others online, they want to know that those other users are who they say they are. This is especially true for marketplace apps where users may meet each other in person to exchange goods or receive services: no one wants to book a gorgeous vacation home or clean-looking car over an app only to find the opposite is true in person.
Airbnb's official safety policy, for example, cites misrepresenting oneself and misrepresenting one's properties as violations of its community standards.
Not only does a lack of authenticity damage the user experience, but it also has the potential to become dangerous if left unchecked by moderators. Some fraud detection tools, such as real-time address verification, can be used to help app administrators verify what users self-report.
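As a rough illustration of what verifying self-reported information might look like on the server side, the sketch below normalizes a user-submitted address and compares it against a verification provider's canonical result. The provider lookup is stubbed out with a hard-coded value purely for this example; a real integration would call a vendor's verification API.

```python
# Hypothetical sketch of address verification, not any specific vendor's API.

def normalize(address: str) -> str:
    # Lowercase, strip punctuation, and collapse whitespace so that
    # cosmetic differences don't cause false mismatches.
    return " ".join(address.lower().replace(",", " ").split())

def stub_provider_lookup(address: str) -> str:
    # Stand-in for a real verification service response.
    return "123 main st springfield il 62704"

def address_matches(self_reported: str) -> bool:
    # Compare the user's claim against the provider's canonical record.
    return normalize(self_reported) == normalize(stub_provider_lookup(self_reported))

print(address_matches("123 Main St, Springfield, IL 62704"))  # True
print(address_matches("999 Elm Ave, Springfield, IL 62704"))  # False
```

In practice, providers return structured, standardized addresses rather than raw strings, but the principle is the same: the platform checks the claim against an independent source instead of trusting the self-report.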
Safety should always be one of a platform's top priorities. Threatening statements, theft, and violence all endanger people and should be treated as zero-tolerance offenses. Safety policies also often cover scenarios where the offender doesn't hurt others directly but creates an unsafe situation, such as a ride-share driver who drives recklessly while carrying passengers.
Fraud and user security are also vital parts of any trust and safety policy. Actions like account takeovers, location spoofing, bonus abuse, and phishing or smishing can all have a negative impact on both the users and the platform as a whole. It’s the platform’s responsibility to make its stance on fraud clear and to employ measures such as robust authentication to detect fraud.
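One common building block of robust authentication is step-up logic: score a login attempt on a few risk signals and require a second factor when the score crosses a threshold. The sketch below is illustrative only; the signal names and weights are invented for the example and not drawn from any real product.

```python
# Minimal step-up authentication sketch. Signals and weights are
# hypothetical; real systems use many more signals and learned scores.

def login_risk_score(signals: dict) -> int:
    weights = {
        "new_device": 2,
        "unfamiliar_location": 2,
        "impossible_travel": 4,  # far from the last login, too soon
        "known_proxy": 3,
    }
    # Sum the weights of every signal that fired on this attempt.
    return sum(w for name, w in weights.items() if signals.get(name))

def requires_second_factor(signals: dict, threshold: int = 3) -> bool:
    # Low-risk logins proceed normally; risky ones trigger step-up auth.
    return login_risk_score(signals) >= threshold

print(requires_second_factor({"new_device": True}))  # False
print(requires_second_factor({"new_device": True,
                              "unfamiliar_location": True}))  # True
```

The design choice here is to add friction only when risk warrants it, which keeps routine logins smooth while making account takeovers, location spoofing, and similar attacks harder to pull off quietly.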
There are a few different ways for platforms to respond to content or actions they determine violate their trust and safety policies. It’s important that moderators strike a balance between upholding trust and safety and not punishing users unfairly. Users deserve to feel safe on a platform without also feeling like they have to walk on eggshells about what they do and say.
When a user posts inappropriate content, the content moderators have a few choices in how to deal with it. They can remove the post, flag the post with a sensitive content warning, issue a strike against the user, or even ban the user entirely. Strike systems can help keep track of users with a history of abusive content and limit repeat offenses.
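A strike system like the one described above can be sketched as a small per-user counter with an escalation ladder. The three-tier warn/suspend/ban scheme below is an assumption made for illustration; real moderation systems typically add strike expiry windows, severity weights, and appeals.

```python
from dataclasses import dataclass, field

# Illustrative strike tracker with a hypothetical three-tier
# escalation: first strike warns, second suspends, third bans.

@dataclass
class StrikeTracker:
    strikes: dict = field(default_factory=dict)

    def add_strike(self, user_id: str) -> str:
        # Increment this user's strike count and return the action
        # the moderation team should take at this tier.
        count = self.strikes.get(user_id, 0) + 1
        self.strikes[user_id] = count
        if count >= 3:
            return "ban"
        if count == 2:
            return "suspend"
        return "warn"

tracker = StrikeTracker()
print(tracker.add_strike("user_42"))  # warn
print(tracker.add_strike("user_42"))  # suspend
print(tracker.add_strike("user_42"))  # ban
```

Keeping the escalation policy in one place like this makes enforcement consistent and auditable, which matters when a banned user appeals the decision.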
For apps like ride-share services and other marketplaces, administrators can limit certain account permissions or ban accounts with multiple offenses. For services where app users may meet merchants in person, the severity of the abuse and whether or not it poses a physical safety threat to users must also be considered.
Appeals processes are another vital piece of the enforcement puzzle to consider. While trust and safety moderation is important, it’s also important to ensure that users feel they’re being treated fairly and that they have recourse. Otherwise, the platform could gain a reputation for being overly censorious or for, in the case of marketplace apps, treating merchants unfairly.
Creating a safety policy is one way that apps can realize their vision of the types of employees and clientele they would like to serve. By setting forth the rules, standards, and expectations that the company will use to moderate the community and protect its users, the platform can invite the kinds of users that want to engage with a safe, trustworthy community.
Solid safety policies are also the foundation of a trust and safety initiative, which helps brands address the digital trust deficit and encourage user loyalty. Taking the time to make a policy shows users that the platform cares about the kind of experience they have while using it.