Alex Fink is a tech executive and the Founder and CEO of the Otherweb, a Public Benefit Corporation that uses AI to help people read news and commentary, listen to podcasts, and search the web without paywalls, clickbait, ads, autoplaying videos, affiliate links, or any other ‘junk’ content. Otherweb is available as an app (iOS and Android), a website, a newsletter, or a standalone browser extension. Prior to Otherweb, Alex was Founder and CEO of Panopteo and Co-founder and Chairman of Swarmer.
Can you provide an overview of Otherweb and its mission to create a junk-free news space?
Otherweb is a public benefit corporation, created to help improve the quality of information people consume.
Our main product is a news app that uses AI to filter out junk and to allow users unlimited customization – controlling every quality threshold and every sorting mechanism the app uses.
In other words, while the rest of the world builds black-box algorithms to maximize user engagement, we want to give users as much value in as little time as possible, and we make everything customizable. We even made our AI models and datasets source-available so people can see exactly what we’re doing and how we evaluate content.
What inspired you to focus on combating misinformation and fake news using AI?
I was born in the Soviet Union and saw what happens to a society when everyone consumes propaganda and nobody has any idea what’s happening in the world. I have vivid memories of my parents waking up at 4am, locking themselves in the closet, and turning on the radio to listen to Voice of America. It was illegal, of course, which is why they did it at night and made sure the neighbors couldn’t hear – but it gave us access to real information. As a result, we left three months before it all came tumbling down and war broke out in my hometown.
I still remember seeing photos of tanks on the street I grew up on and thinking “so this is what real information is worth”.
I want more people to have access to real, high-quality information.
How significant is the threat of deepfakes, particularly in the context of influencing elections? Can you share specific examples of how deepfakes have been used to spread misinformation and the impact they had?
In the short term, it’s a very serious threat.
Voters don’t realize that video and audio recordings can no longer be trusted. They think video is evidence that something happened, and a few years ago this was still true, but now it’s obviously not the case.
This year, in Pakistan, Imran Khan voters got calls from Imran Khan himself, personally, asking them to boycott the election. It was fake, of course, but many people believed it.
Voters in Italy saw one of their female politicians appear in a pornographic video. It was fake, of course, but by the time the fakery was uncovered, the damage was done.
Even here in Arizona, we saw a newsletter advertise itself with an endorsement video starring Kari Lake. She never endorsed it, of course, but the newsletter still got thousands of subscribers.
So come November, I think it’s almost inevitable that we’ll see at least one fake bombshell. And it’s very likely to drop right before the election and turn out to be fake right after it – when the damage has already been done.
How effective are current AI tools at identifying deepfakes, and what improvements do you foresee in the future?
In the past, the best way to identify fake images was to zoom in and look for the characteristic mistakes (aka “artifacts”) image manipulators tended to make: incorrect lighting, missing shadows, uneven edges on certain objects, over-compression around the objects, etc.
The problem with GAN-based editing (aka “deepfakes”) is that none of these common artifacts are present. The way the process works is that one AI model edits the image while another AI model looks for artifacts and points them out – and the cycle is repeated over and over again until no artifacts are left.
As a result, there is generally no way to identify a well-made deepfake video by looking at the content itself.
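To make that generator-versus-critic cycle concrete, here is a minimal adversarial-training sketch in PyTorch. The model shapes, data, and step counts are purely illustrative stand-ins for the image-editing and artifact-hunting models described above, not any real deepfake system:

```python
# Toy sketch of the adversarial loop: a "generator" edits content while a
# "critic" hunts for artifacts; each round of feedback makes the forgery
# harder to detect. Uses random tensors in place of real images.
import torch
import torch.nn as nn

generator = nn.Sequential(nn.Linear(64, 64), nn.Tanh())  # edits the "image"
critic = nn.Sequential(nn.Linear(64, 1))                 # scores fake-ness
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-3)
c_opt = torch.optim.Adam(critic.parameters(), lr=1e-3)
loss_fn = nn.BCEWithLogitsLoss()

real = torch.randn(32, 64)  # stand-in for real image patches

for step in range(200):
    # 1. The critic learns to flag artifacts: real -> 1, edited -> 0.
    fake = generator(torch.randn(32, 64)).detach()
    c_loss = loss_fn(critic(real), torch.ones(32, 1)) + \
             loss_fn(critic(fake), torch.zeros(32, 1))
    c_opt.zero_grad(); c_loss.backward(); c_opt.step()

    # 2. The generator learns to remove whatever the critic flagged.
    fake = generator(torch.randn(32, 64))
    g_loss = loss_fn(critic(fake), torch.ones(32, 1))  # goal: "look real"
    g_opt.zero_grad(); g_loss.backward(); g_opt.step()
```

Once the critic can no longer separate real from edited, artifact-based detection fails by construction – which is exactly the problem described above.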
We have to change our mindset and start assuming that content is only real if we can trace its chain of custody back to the source. Think of it like fingerprints. Seeing fingerprints on the murder weapon isn’t enough. You need to know who found the murder weapon, who brought it back to the storage room, etc. – you have to be able to trace every time it changed hands and make sure it wasn’t tampered with.
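As a rough illustration of that fingerprint analogy, the sketch below models a custody log as a hash chain: every handler appends a record linked to the previous one, and verification replays the chain. The function names and log format are hypothetical simplifications, not a real provenance standard:

```python
# Minimal hash-chain custody log: any edit to the content, or any missing
# hand-off, breaks the chain on verification.
import hashlib, json

def record_transfer(log, handler, content_bytes):
    prev = log[-1]["entry_hash"] if log else "genesis"
    entry = {
        "handler": handler,
        "content_hash": hashlib.sha256(content_bytes).hexdigest(),
        "prev": prev,
    }
    entry["entry_hash"] = hashlib.sha256(
        json.dumps(entry, sort_keys=True).encode()).hexdigest()
    log.append(entry)

def verify_chain(log, content_bytes):
    content_hash = hashlib.sha256(content_bytes).hexdigest()
    prev = "genesis"
    for entry in log:
        body = {k: entry[k] for k in ("handler", "content_hash", "prev")}
        if (entry["prev"] != prev
                or entry["content_hash"] != content_hash
                or entry["entry_hash"] != hashlib.sha256(
                    json.dumps(body, sort_keys=True).encode()).hexdigest()):
            return False
        prev = entry["entry_hash"]
    return True

video = b"...raw video bytes..."
log = []
record_transfer(log, "camera", video)
record_transfer(log, "newsroom", video)
print(verify_chain(log, video))        # True: custody is intact
print(verify_chain(log, b"tampered"))  # False: chain breaks on any edit
```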
What measures can governments and tech companies take to prevent the spread of misinformation during critical times such as elections?
The best antidote to misinformation is time. If you see something that changes things, don’t rush to publish – take a day or two to verify that it’s actually true.
Unfortunately, this approach collides with the media’s business model, which rewards clicks even when the material turns out to be false.
How does Otherweb leverage AI to ensure the authenticity and accuracy of the news it aggregates?
We’ve found that there’s a strong correlation between correctness and form. People who want to tell the truth tend to use language that emphasizes restraint and humility, whereas people who disregard the truth try to get as much attention as possible.
Otherweb’s biggest focus isn’t fact-checking; it’s form-checking. We select articles that avoid attention-grabbing language, provide external references for every claim, state things as they are, and don’t use persuasion techniques.
This method isn’t perfect, of course, and in theory a bad actor could write a falsehood in the exact style our models reward. But in practice, it just doesn’t happen. People who want to tell lies also want a lot of attention – and that is exactly what we’ve taught our models to detect and filter out.
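To show what form-checking might look like at its very simplest, the sketch below scores text on stylistic signals rather than factual ones. The signal list and weights are invented for this example; Otherweb’s actual source-available models are far more sophisticated:

```python
# Toy "form-checking" score: restrained language with external references
# scores high; hype, shouting, and exclamation marks score low.
import re

ATTENTION_GRABBERS = re.compile(
    r"\b(shocking|you won't believe|destroys|slams|must see|breaking)\b",
    re.IGNORECASE,
)

def form_score(text: str) -> float:
    words = max(len(text.split()), 1)
    hype = len(ATTENTION_GRABBERS.findall(text)) / words
    exclaim = text.count("!") / words
    shouting = sum(w.isupper() and len(w) > 3 for w in text.split()) / words
    references = text.count("http")  # crude proxy for external citations
    return references * 0.5 - (hype + exclaim + shouting) * 100

print(form_score("The ministry reported a 2% rise (source: https://example.org)."))
print(form_score("SHOCKING!!! You won't believe what happened NEXT!"))
```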
With the increasing difficulty of discerning real from fake images, how can platforms like Otherweb help restore user trust in digital content?
The best way to help people consume better content is to sample from all sides, pick the best of each, and exercise a lot of restraint. Most media are rushing to publish unverified information these days. Our ability to cross-reference information from hundreds of sources and focus on the best items allows us to protect our users from most forms of misinformation.
What role does metadata, such as the C2PA standard, play in verifying the authenticity of images and videos?
It’s the only viable solution. C2PA may or may not be the right standard, but it’s clear that the only way to validate whether the video you’re watching reflects something that actually happened in reality is to a) ensure the camera used to capture the video was only capturing, not editing, and b) ensure that nobody edited the video after it left the camera. The best way to do that is to focus on metadata.
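A minimal sketch of the underlying idea – the capture device signs the raw footage, so any later edit invalidates the signature – using Ed25519 from the Python cryptography package. Real C2PA manifests, certificate chains, and trust lists are all simplified away here:

```python
# The camera signs the pixels at capture time; anyone can later verify
# that the bytes they are watching are the bytes the camera produced.
from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
from cryptography.exceptions import InvalidSignature

camera_key = Ed25519PrivateKey.generate()  # stands in for a key burned into the camera
video = b"raw sensor output"
signature = camera_key.sign(video)         # would ship as embedded metadata

def is_authentic(data: bytes, sig: bytes, public_key) -> bool:
    try:
        public_key.verify(sig, data)
        return True
    except InvalidSignature:
        return False

pub = camera_key.public_key()
print(is_authentic(video, signature, pub))            # True
print(is_authentic(b"edited video", signature, pub))  # False: edit detected
```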
What future developments do you anticipate in the fight against misinformation and deepfakes?
I think that, within 2-3 years, people will adapt to the new reality and change their mindset. Before the 19th century, the best form of proof was testimony from eyewitnesses. Deepfakes are likely to cause us to return to those tried-and-true standards.
With misinformation more broadly, I believe it’s important to take a more nuanced view and separate disinformation (i.e. false information that’s intentionally created to mislead) from junk (i.e. information that’s created to be monetized, regardless of its truthfulness).
The antidote to junk is a filtering mechanism that makes junk less likely to proliferate. This would change the incentive structure that makes junk spread like wildfire. Disinformation will still exist, just as it has always existed. We were able to deal with it throughout the 20th century, and we’ll be able to deal with it in the 21st.
It’s the deluge of junk we have to worry about, because that’s the part we’re ill-equipped to handle right now. That’s the main problem humanity needs to address.
Once we change the incentives, the signal-to-noise ratio of the internet will improve for everyone.
Thank you for the great interview; readers who want to learn more should visit the Otherweb website, or follow them on X or LinkedIn.