‘We share the same interest as policy makers on user safety’

How do you look at user safety given the emerging regulations?

We share the same interest as policy makers when it comes to safety. We want people to be safe on our platform, we want them to be able to connect on our platform, and we think there need to be fair industry standards, so that we’re all on the same page about what’s expected of the industry and the industry has clear guidance to follow.

I think it’s important that in that guidance, we ensure that people still have access to these technologies, that they’re still competitive and creative, and that people can still make connections. I believe that by collaborating with policy makers, we can land in the right place. And we really do welcome those standards.

Big Tech has mostly asked for uniformity in regulations across the world. Does that affect how you design safety standards?

Well, we certainly want as much uniformity as we can get. We’re building our platform at global scale, so we want to build standards at scale. That said, different countries are different, and we recognize there will be some differences that reflect that. But I think this is an area where, by communicating and collaborating, we can reach something that’s close to global.

I’ll give you an example. If you think about age verification, knowing the age of users so that we can provide age-appropriate experiences is a vexing problem for all of industry. But it is something that we have taken seriously, and we have put technology in place to help us identify age. We also know that policy makers around the world, for the most part, think it’s important for companies to understand age and to provide age-appropriate experiences.

So we’re seeing conversations right now, including in India, around parental consent and age verification; we’re seeing those same conversations in the US and in Europe. Finding a way to deliver age-appropriate experiences globally is imperative for our company, and I think trying to set a standard that works globally is really important.

There’s some conversation around using IDs as a way to verify, and there’s some value in that; some countries have national ID systems, like India. But even with those systems, there are many people who don’t have IDs and who won’t have access if presenting an ID is the only option. IDs also force industry to take in more information than is needed to verify age. That doesn’t mean IDs shouldn’t be one option, but there are others, such as technology that uses the face to estimate age. That’s highly accurate and doesn’t require taking in other information. In order to do that, we have to engage with policy makers to get to that consistency.
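
To make the contrast with ID checks concrete, here is a minimal sketch of how an estimation-based age gate might behave. The thresholds, buffer, and confidence cutoff are illustrative assumptions, not the company’s actual system; the age estimate itself would come from a separate vision model.

```python
# A minimal sketch of an ID-free age gate built on facial age estimation.
# The required_age, buffer, and confidence cutoff below are assumptions
# for illustration; the (age, confidence) pair would come from a model.

def age_gate(estimated_age: float, confidence: float,
             required_age: int = 13, buffer: float = 2.0) -> str:
    """Map a model's (age, confidence) estimate to a gating decision."""
    if confidence < 0.8:
        # Uncertain estimates fall back to another verification path
        # (for example, presenting an ID) rather than a hard decision.
        return "fallback_verification"
    if estimated_age >= required_age + buffer:
        return "allow"  # clearly above the threshold
    if estimated_age < required_age - buffer:
        return "deny"   # clearly below the threshold
    return "fallback_verification"  # too close to the boundary to call

print(age_gate(estimated_age=19.4, confidence=0.95))  # allow
print(age_gate(estimated_age=13.5, confidence=0.95))  # fallback_verification
```

The buffer zone is the key design choice here: estimation is probabilistic, so near-threshold cases route to a secondary check instead of a hard allow or deny.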

How do you look at content safety, in terms of what people should and shouldn’t see?

Well, we have our community standards, and we try to balance people’s ability to express themselves with ensuring that they’re safe on our platform. We also have tools, some of which work in the background, that we use to find content that might violate those standards and remove it from the platform. Then there’s borderline content: content that doesn’t necessarily violate our policies but, in the context of young people, might be more problematic.

Sometimes that content at the edges can be problematic, particularly for teens. We won’t recommend it. We will age-gate it out for teen users.
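
As a toy illustration of the three-tier policy described here (remove violating content, don’t recommend borderline content, and age-gate it away from teens), the following sketch uses hypothetical labels and decision states, not the platform’s actual logic:

```python
from dataclasses import dataclass

@dataclass
class Content:
    content_id: str
    label: str  # assumed labels: "violating", "borderline", or "benign"

def visibility(content: Content, viewer_is_teen: bool) -> str:
    """Decide how a piece of labelled content is surfaced to a viewer."""
    if content.label == "violating":
        return "remove"  # violates community standards outright
    if content.label == "borderline":
        if viewer_is_teen:
            return "hide"  # age-gated away from teen accounts
        return "show_without_recommending"  # visible, but never recommended
    return "show"

post = Content("c1", "borderline")
print(visibility(post, viewer_is_teen=True))   # hide
print(visibility(post, viewer_is_teen=False))  # show_without_recommending
```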

Can you give us examples of these tools that work in the background?

Yeah, going back to the age issue. Even apart from formally verifying age, we use background technology to try to identify people who might be lying about their age, and we remove them if they’re under the age of 13. Say someone posts “happy 12th birthday”: that’s a signal that the person is not 13 or above, and we can use it to require that person to verify their age. If they’re unable to verify their age, we take action against that account. Those are the kinds of signals we use; we train and create classifiers on them to identify violating content.
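
As a rough sketch of the signal-to-enforcement flow described above, the rule below spots the “happy 12th birthday” pattern from the example and routes the account to verification or removal. The regex, signal names, and decision states are illustrative assumptions; the real system trains classifiers over many such signals rather than hand-written rules.

```python
import re

# Illustrative signal: a birthday post implying the poster is under 13.
BIRTHDAY = re.compile(r"happy (\d{1,2})(?:st|nd|rd|th)? birthday", re.IGNORECASE)

def underage_signals(posts: list[str]) -> list[str]:
    """Collect text signals suggesting an account may belong to an under-13 user."""
    signals = []
    for post in posts:
        match = BIRTHDAY.search(post)
        if match and int(match.group(1)) < 13:
            signals.append(f"birthday_mention:{match.group(1)}")
    return signals

def review_account(posts: list[str], verified_age: int | None) -> str:
    """Route an account based on collected signals and any verified age."""
    if not underage_signals(posts):
        return "no_action"
    if verified_age is None:
        return "require_age_verification"  # signals found, age unverified
    return "remove_account" if verified_age < 13 else "no_action"

print(review_account(["Happy 12th birthday to me!"], verified_age=None))
# -> require_age_verification
```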

Have the IT rules and current regulations at all affected how you build safety mechanisms? Any tweaks you had to make?

I think we’ve not waited for regulations. Well before policy makers started regulating, we heard what their concerns were, and we worked to build solutions, because it takes a long time to create legislation and regulation. In the meantime, we feel we have a commitment to safety that we want to uphold for our users. So I don’t know that there are any specific changes in particular, but we have been listening to policy makers for a very long time and trying to meet their concerns.

How does safety change in the context of video? Do your technologies change?

I don’t know if the standards change, but certainly the technologies change. If you were to look at some of the ways that we’re trying to address safety in the metaverse, for example, it’s different.

That’s because of the complexities involved. We actually have moderators who can come into an experience, or be brought into one by someone using the platform. That’s very different, but a dynamic space calls for it. We don’t do that in the same way in a space that’s primarily text-based or photo-based.

How are you balancing the disclosure of proprietary information that policy makers may require to build policy around platforms?

Yeah, more and more we’re seeing a push for a better understanding of our technologies. We’ve seen some legislation that asks for risk assessments. And I think in many ways our company has tried to be proactive in providing information about what we do, and in providing ways to measure our work and hold us accountable.

We’re trying to build those bridges, so that we can provide the kind of transparency that enables people to hold us to account, that enables people to measure our progress.

You’re right, you have to find that balance and allow companies to protect what’s proprietary. But as we’ve shown, there are ways to give policy makers enough information to understand these things.

I think the other danger is that understanding the technology today doesn’t necessarily mean it will be the same tomorrow. So, to some degree, building legislative solutions that focus on processes, without being too prescriptive, is probably the best way to ensure that we develop legislation and standards that have a lifespan.
