Fetch Consolidates AI Tools and Saves 30% Development Time with Hugging Face on AWS

If you need support in using Hugging Face and AWS, please get in touch with us here – our team will contact you to discuss your requirements!



Executive Summary

Fetch, a consumer rewards company, developed about 15 different AI tools to help it receive, route, read, process, analyze, and store receipts uploaded by users. The company has more than 18 million active monthly users for its shopping rewards app. Fetch wanted to rebuild its AI-powered platform and, using Amazon Web Services (AWS) and with the support of AWS Partner Hugging Face, moved from using third-party applications to developing its own tools to gain better insights about customers. Consumers scan receipts—or forward electronic receipts—to receive rewards points for their purchases. Businesses can offer special rewards to users, such as extra points for purchasing a specific product. The company can now process more than 11 million receipts per day faster, and it gets better data.



Fetch Needed a Scalable Way to Train AI Faster

Fetch—formerly Fetch Rewards—has grown since its founding to serve 18 million active users every month who scan 11 million receipts every day to earn reward points. Users simply take a picture of their receipt and upload it using the company’s app. Users may also upload electronic receipts. Receipts earn points; if the receipt is from a brand partner of Fetch, it might qualify for promotions that award additional points. Those points can be redeemed for gift cards from a variety of partners. But scanning is just the beginning. Once Fetch receives the receipts, it must process them, extracting data and analytics and filing both the data and the receipts. It has been using artificial intelligence (AI) tools running on AWS to do that.

The company was using an AI solution from a third party to process receipts, but found it wasn’t getting the data insights it needed. Fetch’s business partners wanted details about how customers were engaging with their promotions, and Fetch didn’t have the granularity it needed to extract and process data from millions of receipts daily. “Fetch was using a third-party provider for its brain, which is scanning receipts, but scanning isn’t enough,” says Boris Kogan, computer vision scientist at Fetch. “That solution was a black box and we had no control or insight into what it did. We just got results we had to accept. We couldn’t give our business partners the information they wanted.”

Kogan joined Fetch tasked with building thorough machine learning (ML) and AI expertise into the company and giving it full access to all elements of the data it was receiving. To do this, he hired a team of engineers to bring his vision to life. “All of our infrastructure runs on AWS, and we also depend on AWS products to train our models,” says Kogan. “When the team began working on creating a brain of our own, of course, we first had to train our models, and we did that on AWS. We allocated 12 months for the project and completed it in 8 months because we always had the resources we needed.”



Hugging Face Opens Up the Black Box

The Fetch team engaged with AWS Partner Hugging Face through the Hugging Face Expert Acceleration Program on the AWS Marketplace to help Fetch unlock new tools to power processes after the scans had been uploaded. Hugging Face is a leader in open-source AI and provides guidance to enterprises on using AI. Many enterprises, including Fetch, use Transformers from Hugging Face, which lets users train and deploy open-source ML models in minutes. “Easy access to Transformers models is something that began with Hugging Face, and they’re great at that,” says Kogan. The Fetch and Hugging Face teams worked to identify and train state-of-the-art document AI models, improving entity resolution and semantic search.
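Fetch’s actual models aren’t public, but the idea behind embedding-based entity resolution—mapping a noisy receipt line item to a canonical product name by vector similarity—can be sketched in a few lines. The character-trigram “embedding” below is an illustrative stand-in for the transformer embeddings a real document AI system would use, so the example runs without downloading a model; the function names and catalog are assumptions, not Fetch’s code.

```python
# Toy entity resolution: match a raw receipt line to the closest catalog
# entry by cosine similarity. Character trigrams stand in for transformer
# embeddings so this sketch is self-contained.
import math
from collections import Counter

def embed(text: str) -> Counter:
    """Very rough stand-in for a learned embedding: trigram counts."""
    t = f"  {text.lower()}  "
    return Counter(t[i:i + 3] for i in range(len(t) - 2))

def cosine(a: Counter, b: Counter) -> float:
    dot = sum(a[k] * b[k] for k in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def resolve(item: str, catalog: list[str]) -> str:
    """Return the catalog entry most similar to the raw line item."""
    return max(catalog, key=lambda name: cosine(embed(item), embed(name)))

catalog = ["Peanut Butter 16oz", "Orange Juice 1L", "Paper Towels 6pk"]
print(resolve("PNUT BUTTER 16 OZ", catalog))  # Peanut Butter 16oz
```

Swapping `embed` for a sentence-transformer encoder turns the same `resolve` loop into semantic search over a real product catalog.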

In this relationship, Hugging Face acted in an advisory capacity, transferring knowledge to help the Fetch engineers use its resources more effectively. “Fetch had a great team in place,” says Yifeng Yin, machine learning engineer at Hugging Face. “They didn’t need us to come in and run the project or build it. They wanted to learn how to use Hugging Face to train the models they were building. We showed them how to use the resources, and they ran with it.” With Yifeng’s guidance, Fetch was able to cut its development time by 30 percent.

Because it was building its own AI and ML models to take over from the third-party ‘brain’, Fetch needed to ensure a robust system that produced good results before switching over. It had to do this without interrupting the flow of millions of receipts every day. “Before we rolled anything out, we built a shadow pipeline,” says Sam Corzine, lead machine learning engineer at Fetch. “We took all of the receipts and reprocessed them in our new ML pipeline. We could do audits of everything. It was running at full volume, reprocessing all of those 11 million receipts and running analytics on them for quite some time before anything made it into the primary data fields. The black box was still running the show, and we were checking our results against it.” The solution uses Amazon SageMaker—which lets businesses build, train, and deploy ML models for any use case with fully managed infrastructure, tools, and workflows. It also uses AWS Inferentia accelerators to deliver high performance at low cost for deep learning (DL) inference applications.
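The shadow-pipeline pattern Corzine describes can be sketched minimally: the legacy black box stays authoritative for the primary data fields, while the candidate model processes the same receipts in parallel and disagreements are collected for audit. Everything below—the class, the agreement metric, and the toy extractors—is an illustrative assumption, not Fetch’s implementation.

```python
# Minimal shadow pipeline: the legacy extractor remains the source of
# truth; the new in-house model runs on every receipt in the shadow and
# mismatches are logged so the candidate can be audited before cutover.
from dataclasses import dataclass, field
from typing import Callable

@dataclass
class ShadowPipeline:
    legacy: Callable[[dict], dict]     # existing third-party "brain"
    candidate: Callable[[dict], dict]  # new model under evaluation
    mismatches: list = field(default_factory=list)
    total: int = 0

    def process(self, receipt: dict) -> dict:
        """Return the legacy result; run the candidate in the shadow."""
        self.total += 1
        primary = self.legacy(receipt)
        shadow = self.candidate(receipt)
        if shadow != primary:
            self.mismatches.append((receipt, primary, shadow))
        return primary  # only legacy output reaches the primary data fields

    def agreement_rate(self) -> float:
        return 1.0 - len(self.mismatches) / self.total if self.total else 1.0

# Toy extractors standing in for the real models
legacy = lambda r: {"total": r["total"]}
candidate = lambda r: {"total": round(r["total"], 2)}

pipeline = ShadowPipeline(legacy, candidate)
for receipt in [{"total": 12.50}, {"total": 3.999}]:
    pipeline.process(receipt)

print(pipeline.agreement_rate())  # 0.5: the second receipt disagrees
```

Once the agreement rate over the full audited volume is acceptable, cutover amounts to swapping which function’s output `process` returns.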



Fetch Grows AI Expertise, Cuts Latency by 50%, and Saves Costs

Fetch’s commitment to developing in-house ML and AI capabilities has resulted in several advantages, including some cost savings, but more important is the development of a service that better serves the needs of its customers. “With any app you have to give the customer a reason to keep coming back,” says Corzine. “We’ve improved responsiveness for customers with faster processing of uploads, cutting processing latency by 50 percent. If you keep customers waiting too long, they’ll disengage. And the more customers use Fetch, the better understanding we and our partners get about what’s important to them. By building our own models, we get details we never had before.”

The company can now train a model in hours instead of the days or even weeks it used to take. Development time has also been reduced by about 30 percent. And while it may not be possible to put a number on it, another major benefit has been creating a more stable foundation for Fetch. “Relying on a third-party black box presented considerable business risk to us,” says Corzine. “Because Hugging Face and its community existed, we were able to use that tooling and work with that community. At the end of the day, we now control our destiny.”

Fetch is continuing to improve its service to customers and gain a better understanding of customer behavior now that it is an AI-first company, rather than a company that uses a third-party AI ‘brain’. “Hugging Face and AWS gave us the infrastructure and the resources to do what we need,” says Kogan. “Hugging Face has democratized transformer models, models that were nearly impossible to train, and made them available to anyone. We couldn’t have done this without them.”

This article is a cross-post of a piece originally published in February 2024 on AWS’s website.


