Arjun Narayan, Head of Global Trust and Safety for SmartNews – Interview Series

Arjun Narayan is the Head of Global Trust and Safety for SmartNews, a news aggregator app; he is also an AI ethics and tech policy expert. SmartNews combines AI with a human editorial team as it aggregates news for readers.

You were instrumental in helping to establish Google’s Trust & Safety Asia Pacific hub in Singapore. What were some key lessons you learned from this experience?

When building Trust and Safety teams, country-level expertise is critical because abuse can look very different depending on the country you’re regulating. For instance, the way Google products were abused in Japan was different from how they were abused in Southeast Asia and India. This means abuse vectors vary greatly depending on who is doing the abusing and what country you are based in; there is no homogeneity. This was something we learned early.

I also learned that cultural diversity is incredibly important when building Trust and Safety teams abroad. At Google, we ensured there was enough cultural diversity and understanding among the people we hired. We were looking for individuals with specific domain expertise, but also for language and market expertise.

I also found cultural immersion to be incredibly important. When we were building Trust and Safety teams across borders, we wanted to make sure our engineering and business teams could immerse themselves. This helps ensure everyone is closer to the issues we were trying to manage. To do that, we did quarterly immersion sessions with key personnel, and that helped raise everyone’s cultural IQ.

Finally, cross-cultural comprehension was so important. I managed a team in Japan, Australia, India, and Southeast Asia, and the ways in which they interacted were wildly different. As a leader, you want to ensure everyone can find their voice. Ultimately, this is all designed to build a high-performance team that can execute sensitive tasks like Trust and Safety.

Previously, you were also on the Trust & Safety team at ByteDance for the TikTok application. How are videos that are often shorter than one minute monitored effectively for safety?

I’d like to reframe this question a bit, because it doesn’t really matter whether a video is short or long form. That isn’t a factor when we evaluate video safety, and length has no real bearing on whether a video can spread abuse.

When I think of abuse, I think of it as “issues.” What are some of the issues users are vulnerable to? Misinformation? Disinformation? Whether a video is one minute or one hour, misinformation is still being shared and the degree of abuse remains comparable.

Depending on the issue type, you begin to think through policy enforcement, safety guardrails, and how you can protect vulnerable users. For instance, say there is a video of somebody committing self-harm. Once we receive notification that this video exists, we must act with urgency, because someone could lose a life. We rely heavily on machine learning to do this sort of detection. The first move is always to contact authorities to try to save that life; nothing is more important. From there, we aim to suspend the video, livestream, or whatever format it is being shared in. We need to make sure we’re minimizing exposure to that type of harmful content as quickly as possible.

Likewise, if it’s hate speech, there are different ways to unpack that. In the case of bullying and harassment, it really depends on the issue type, and depending on that, we might tweak our enforcement options and safety guardrails. Another example of a good safety guardrail was machine learning we implemented that could detect when someone writes something inappropriate in the comments and provide a prompt to make them think twice before posting that comment. We wouldn’t necessarily stop them, but our hope was that people would think twice before sharing something mean.

It comes down to a mixture of machine learning and keyword rules. But when it comes to livestreams, we also had human moderators reviewing streams flagged by AI so they could report immediately and implement protocols. Because livestreams happen in real time, it’s not enough to rely on users to report; we need humans monitoring in real time.
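To make that mixture concrete, here is a minimal sketch, in Python, of a hybrid keyword-plus-classifier “think twice” nudge of the kind described above. The keyword list, threshold, and function names are illustrative assumptions, not ByteDance’s actual implementation; a real system would call a trained toxicity model rather than the stand-in scorer used here.

```python
# Hypothetical sketch: a hybrid keyword + ML nudge before a comment is posted.
# Keyword list, threshold, and scorer are assumptions for illustration only.

TOXIC_KEYWORDS = {"idiot", "loser", "nobody likes you"}

def keyword_hit(comment: str) -> bool:
    """Cheap rule-based check against a blocklist of phrases."""
    text = comment.lower()
    return any(kw in text for kw in TOXIC_KEYWORDS)

def toxicity_score(comment: str) -> float:
    """Stand-in for a trained text classifier (e.g. a fine-tuned transformer).
    Faked here so the sketch runs on its own."""
    return 0.9 if keyword_hit(comment) else 0.1

def should_nudge(comment: str, threshold: float = 0.7) -> bool:
    """Nudge (not block) when either signal fires: show a prompt asking the
    user to reconsider before posting."""
    return keyword_hit(comment) or toxicity_score(comment) >= threshold

if __name__ == "__main__":
    for c in ["great video!", "you are such a loser"]:
        print(c, "->", "show nudge" if should_nudge(c) else "post normally")
```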

Since 2021, you’ve been the Head of Trust, Safety, and Customer Experience at SmartNews, a news aggregator app. Could you discuss how SmartNews leverages machine learning and natural language processing to identify and prioritize high-quality news content?

The central concept is that we have certain “rules,” or machine learning technology, that can parse an article or advertisement and understand what that article is about.

Whenever something violates our “rules,” for example something that is factually incorrect or misleading, we have machine learning flag that content to a human reviewer on our editorial team. At that stage, the reviewer understands our editorial values and can quickly review the article and make a judgement about its appropriateness or quality. From there, actions are taken to address it.
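As a rough illustration of that flag-then-review flow, the sketch below routes articles that trip a policy check into a queue for a human editor. The heuristic, the reason labels, and the queue structure are assumptions for the example, not SmartNews’s actual pipeline; in practice the check would be an NLP classifier scoring the article against editorial policies.

```python
# Hypothetical sketch of "ML flags, human decides" for article ingestion.
# Names, labels, and the trivial heuristic are assumptions for illustration.
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Article:
    url: str
    text: str

@dataclass
class ReviewQueue:
    items: List[Article] = field(default_factory=list)

    def enqueue(self, article: Article, reason: str) -> None:
        # In production this would notify a human editor; here we just store it.
        print(f"flagged for editorial review ({reason}): {article.url}")
        self.items.append(article)

def violates_rules(article: Article) -> Optional[str]:
    """Stand-in for an NLP classifier scoring an article against policy.
    Returns a reason string if the article should go to a human reviewer."""
    if "miracle cure" in article.text.lower():
        return "possibly misleading health claim"
    return None

def ingest(article: Article, queue: ReviewQueue) -> None:
    reason = violates_rules(article)
    if reason:
        queue.enqueue(article, reason)  # the human makes the final judgement
    # otherwise the article proceeds through normal ranking

if __name__ == "__main__":
    q = ReviewQueue()
    ingest(Article("https://example.com/a", "This miracle cure ends aging."), q)
```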

How does SmartNews use AI to ensure the platform is safe, inclusive, and objective?

SmartNews was founded on the premise that hyper-personalization is good for the ego but is also polarizing us all by reinforcing biases and putting people in a filter bubble.

The way SmartNews uses AI is slightly different because we’re not exclusively optimizing for engagement. Our algorithm wants to understand you, but it’s not necessarily hyper-personalizing to your taste. That’s because we believe in broadening perspectives. Our AI engine will introduce you to concepts and articles beyond adjacent concepts.

The idea is that there are things people need to know in the public interest, and there are things people need to know to broaden their scope. The balance we try to strike is to provide these contextual analyses without being big-brotherly. Sometimes people won’t like the things our algorithm puts in their feed. When that happens, people can choose not to read that article. Nonetheless, we’re proud of the AI engine’s ability to promote serendipity, curiosity, whatever you want to call it.
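One simple way to picture this kind of “broadening” is to interleave out-of-bubble articles into an otherwise personalized ranking. The sketch below assumes a fixed-interval slotting strategy with made-up article names; it is purely an illustrative assumption, not SmartNews’s actual ranking logic.

```python
# Hypothetical sketch: inject "broadening" articles into a personalized feed.
# The slot interval and article names are assumptions for illustration.
from typing import List

def build_feed(personalized: List[str], broaden: List[str], every_n: int = 4) -> List[str]:
    """Interleave broadening articles into a personalized ranking.

    personalized: articles ranked by predicted user interest.
    broaden: public-interest or out-of-bubble articles.
    every_n: insert one broadening article after every N personalized ones.
    """
    feed: List[str] = []
    broaden_iter = iter(broaden)
    for i, article in enumerate(personalized, start=1):
        feed.append(article)
        if i % every_n == 0:
            nxt = next(broaden_iter, None)
            if nxt is not None:
                feed.append(nxt)
    return feed

if __name__ == "__main__":
    liked = [f"sports_{i}" for i in range(1, 9)]
    wider = ["local_election_explainer", "science_feature"]
    print(build_feed(liked, wider))
```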

On the safety side of things, SmartNews has something called a “Publisher Rating,” an algorithm designed to constantly evaluate whether a publisher is safe or not. Ultimately, we want to determine whether a publisher has an authoritative voice. For instance, we can all collectively agree ESPN is an authority on sports. But if you’re a random blog copying ESPN content, we need to make sure that ESPN is ranking higher than that random blog. The publisher rating also considers factors like originality, when articles were posted, what user reviews look like, etc. It’s ultimately a spectrum of many factors we consider.
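To illustrate how such a multi-factor rating could combine signals, here is a small sketch with made-up factor names and weights. SmartNews’s actual formula and signals are not public, so treat this as an assumption-laden example of a weighted score, not the real Publisher Rating.

```python
# Hypothetical sketch of a multi-factor publisher score.
# Factor names and weights are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class PublisherSignals:
    authority: float      # 0..1, recognized expertise in the topic area
    originality: float    # 0..1, share of original vs. copied reporting
    freshness: float      # 0..1, how promptly articles are published
    user_feedback: float  # 0..1, aggregated user review signal

WEIGHTS = {"authority": 0.4, "originality": 0.3, "freshness": 0.15, "user_feedback": 0.15}

def publisher_rating(s: PublisherSignals) -> float:
    """Weighted combination of the factors; a real system would learn the
    weights and use many more signals."""
    return (WEIGHTS["authority"] * s.authority
            + WEIGHTS["originality"] * s.originality
            + WEIGHTS["freshness"] * s.freshness
            + WEIGHTS["user_feedback"] * s.user_feedback)

if __name__ == "__main__":
    espn_like = PublisherSignals(authority=0.95, originality=0.9, freshness=0.9, user_feedback=0.85)
    copycat = PublisherSignals(authority=0.2, originality=0.05, freshness=0.6, user_feedback=0.4)
    print("Authoritative publisher:", round(publisher_rating(espn_like), 3))
    print("Copycat blog:           ", round(publisher_rating(copycat), 3))
```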

One thing that trumps everything is “What does a user want to read?” If a user wants to view clickbait articles, we can’t stop them as long as it isn’t illegal and doesn’t break our guidelines. We don’t impose on the user, but when something is unsafe or inappropriate, we do our due diligence before it hits the feed.

What are your views on journalists using generative AI to help them produce content?

I think this question is an ethical one, and something we’re currently debating at SmartNews. How should SmartNews view publishers submitting content produced by generative AI instead of journalists writing it up?

I think that train has officially left the station. Today, journalists are using AI to enhance their writing. It’s a function of scale; we don’t have all the time in the world to produce articles at a commercially viable rate, especially as news organizations continue to cut staff. The question then becomes, how much creativity goes into this? Is the article polished by the journalist? Or is the journalist completely reliant?

At this juncture, generative AI isn’t able to write articles on breaking news events because there is no training data for them. Nonetheless, it can still produce a fairly good generic template for doing so. For instance, school shootings are so common that we could assume generative AI could give a journalist a template on school shootings, and the journalist could insert the school that was affected to receive a complete draft.

From my standpoint working with SmartNews, there are two principles I think are worth considering. Firstly, we want publishers to be up front about telling us when content was generated by AI, and we want to label it as such. That way, when people are reading the article, they aren’t misled about who wrote it. That is transparency of the highest order.

Secondly, we want that article to be factually correct. We know that generative AI tends to make things up, so any article written by generative AI must be proofread by a journalist or editorial staff.

You’ve previously argued for tech platforms to unite and create common standards to fight digital toxicity. How important an issue is this?

I think this issue is of critical importance, not only for companies to operate ethically, but to maintain a level of dignity and civility. In my view, platforms should come together and develop certain standards to maintain this humanity. For instance, nobody should ever be encouraged to take their own life, yet in some situations we find this sort of abuse on platforms, and I think that’s something companies should come together to guard against.

Ultimately, when it comes to problems of humanity, there should not be competition. There shouldn’t even necessarily be competition over who is the cleanest or safest community; we should all aim to make sure our users feel safe and understood. Let’s compete on features, not exploitation.

What are some ways that digital companies can work together?

Companies should come together when there are shared values and the potential for collaboration. There are always spaces where there is intersectionality across companies and industries, especially when it comes to fighting abuse, ensuring civility on platforms, or reducing polarization. These are moments when companies should be working together.

There is of course a commercial angle to competition, and typically competition is good. It helps ensure strength and differentiation across companies and delivers solutions with a level of efficacy that monopolies cannot guarantee.

But when it comes to protecting users, promoting civility, or reducing abuse vectors, these are topics core to preserving the free world. These are things we need to do to make sure we protect what is sacred to us, and our humanity. In my view, all platforms have a responsibility to collaborate in defense of human values and the values that make us a free world.

What are your current views on responsible AI?

We’re at the beginning of something that will be very pervasive in our lives. This next phase of generative AI is something we don’t fully understand, or can only partially comprehend, at this juncture.

When it comes to responsible AI, it is incredibly important that we develop strong guardrails, or else we may end up with a Frankenstein monster of generative AI technologies. We need to spend the time thinking through everything that could go wrong, whether that is bias creeping into the algorithms, or large language models themselves being used by the wrong people for nefarious acts.

The technology itself isn’t good or bad, but it can be used by bad people to do bad things. This is why investing time and resources in AI ethicists to do adversarial testing and understand the design faults is so critical. That will help us understand how to prevent abuse, and I think that is probably the most important aspect of responsible AI.

Because AI can’t yet think for itself, we need smart people who can build these defaults in when AI is being programmed. The important aspect to consider right now is timing: we need these positive actors doing this NOW, before it’s too late.

Unlike other systems we’ve designed and built in the past, AI is different because it can iterate and learn on its own, so if you don’t set up strong guardrails around what and how it is learning, we cannot control what it might become.

Right now, we’re seeing some big companies shedding ethics boards and responsible AI teams as part of major layoffs. It remains to be seen how seriously these tech majors are taking the technology and how seriously they are reviewing the potential downfalls of AI in their decision making.

Is there anything else that you would like to share about your work with SmartNews?

I joined SmartNews because I believe in its mission; the mission has a certain purity to it. I strongly believe the world is becoming more polarized, and there isn’t enough media literacy today to help combat that trend.

Unfortunately, there are too many people who take WhatsApp messages as gospel and believe them at face value. That can lead to tremendous consequences, including, and especially, violence. This all boils down to people not understanding what they can and cannot believe.

If we don’t educate people, or inform them on how to judge the trustworthiness of what they’re consuming, and if we don’t build the media literacy to discern between news and fake news, we will continue to aggravate the problem and repeat the mistakes history has taught us not to make.

One of the most important components of my work at SmartNews is to help reduce polarization in the world. I want to fulfill the founder’s mission to improve media literacy, so that people can understand what they are consuming and form informed opinions about the world and its many diverse perspectives.
