Since the general AI agent Manus launched last week, it has spread online like wildfire. And not just in China, where it was developed by the Wuhan-based startup Butterfly Effect. It has made its way into the global conversation, with influential voices in tech, including Twitter cofounder Jack Dorsey and Hugging Face product lead Victor Mustar, praising its performance. Some have even dubbed it “the second DeepSeek,” comparing it to the earlier AI model that took the industry by surprise for its unexpected capabilities as well as its origin.
Manus claims to be the world’s first general AI agent, leveraging multiple AI models (such as Anthropic’s Claude 3.5 Sonnet and fine-tuned versions of Alibaba’s open-source Qwen) and various independently operating agents to act autonomously on a wide range of tasks. (This makes it different from AI chatbots, including DeepSeek, which are based on a single large language model family and are primarily designed for conversational interactions.)
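Manus hasn’t published its internal architecture, but the multi-model, multi-agent pattern described above is commonly orchestrated along the following lines. This is a minimal, hypothetical Python sketch: the class names, model names, and routing logic are all my assumptions for illustration, not Manus’s actual design.

```python
# Hypothetical sketch of a multi-model, multi-agent orchestrator.
# None of these names come from Manus; they only illustrate the pattern
# of routing subtasks to independently operating agents backed by
# different underlying models.

from dataclasses import dataclass

@dataclass
class Agent:
    name: str   # e.g. "browser", "writer"
    model: str  # the underlying model this agent calls

    def run(self, subtask: str) -> str:
        # In a real system this would call the model's API.
        return f"[{self.name}/{self.model}] result for: {subtask}"

class Orchestrator:
    """Splits a task into subtasks and dispatches each to a specialist agent."""

    def __init__(self, agents: dict[str, Agent]):
        self.agents = agents

    def plan(self, task: str) -> list[tuple[str, str]]:
        # A real planner would be an LLM call; here one plan is hard-coded.
        return [
            ("browser", f"research: {task}"),
            ("writer", f"summarize findings on: {task}"),
        ]

    def execute(self, task: str) -> list[str]:
        return [self.agents[role].run(subtask) for role, subtask in self.plan(task)]

orchestrator = Orchestrator({
    "browser": Agent("browser", "claude-3-5-sonnet"),
    "writer": Agent("writer", "qwen-fine-tuned"),
})
print("\n".join(orchestrator.execute("notable reporters covering China tech")))
```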
Despite all the hype, very few people have had a chance to use it. Currently, under 1% of the users on the wait list have received an invite code. (It’s unclear how many people are on this list, but for a sense of how much interest there is, Manus’s Discord channel has more than 186,000 members.)
I was able to obtain access to Manus, and when I gave it a test-drive, I found that using it feels like collaborating with a highly intelligent and efficient intern: While it occasionally lacks understanding of what it’s being asked to do, makes incorrect assumptions, or cuts corners to expedite tasks, it explains its reasoning clearly, is remarkably adaptable, and can improve substantially when provided with detailed instructions or feedback. Ultimately, it’s promising but not perfect.
Just like its parent company’s previous product, an AI assistant called Monica released in 2023, Manus is intended for a global audience. English is set as the default language, and its design is clean and minimalist.
To get in, a user has to enter a valid invite code. Then the system directs users to a landing page that closely resembles those of ChatGPT or DeepSeek, with previous sessions displayed in a left-hand column and a chat input box in the center. The landing page also features sample tasks curated by the company, ranging from business strategy development to interactive learning to personalized audio meditation sessions.
Like other reasoning-based agentic AI tools, such as ChatGPT DeepResearch, Manus is capable of breaking tasks down into steps and autonomously navigating the web to get the information it needs to complete them. What sets it apart is the “Manus’s Computer” window, which allows users not only to observe what the agent is doing but also to intervene at any point.
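To make the observe-and-intervene idea concrete, here is a minimal sketch of a step-wise agent loop with a human checkpoint between steps. This is a hypothetical illustration of the pattern, not Manus’s actual code; all function names are my own.

```python
# Hypothetical sketch of an agent loop with an observation point and a
# human-intervention hook, in the spirit of the "Manus's Computer" view.

def plan_steps(task: str) -> list[str]:
    # A real agent would ask an LLM to decompose the task.
    return [
        f"search the web for: {task}",
        f"extract key facts about: {task}",
        f"draft a final answer for: {task}",
    ]

def execute_step(step: str) -> str:
    # Placeholder for actual browsing or tool use.
    return f"completed: {step}"

def run_agent(task: str, ask_human=input) -> list[str]:
    results = []
    for step in plan_steps(task):
        print(f"[agent] next step: {step}")  # visible to the user
        correction = ask_human("press Enter to continue, or type a revised step: ")
        if correction.strip():
            step = correction  # the user takes over this step
        results.append(execute_step(step))
    return results

if __name__ == "__main__":
    for line in run_agent("two-bedroom listings in New York City"):
        print(line)
```

In the real product, this visibility presumably runs continuously in the “Manus’s Computer” window rather than pausing for input at each step, so the user can step in only when something goes wrong.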
To put it to the test, I gave Manus three assignments: (1) compile a list of notable reporters covering China tech, (2) search for two-bedroom property listings in New York City, and (3) nominate potential candidates for Innovators Under 35, a list our publication puts together every year.
Here’s how it did:
Task 1: The first list of reporters Manus gave me contained only five names, with five “honorable mentions” below them. I noticed that it listed some journalists’ notable work but didn’t do so for others. I asked Manus why. The reason it offered was hilariously simple: It got lazy. It was “partly due to time constraints as I tried to expedite the research process,” the agent told me. When I insisted on consistency and thoroughness, Manus responded with a comprehensive list of 30 journalists, noting their current outlet and listing notable work. (I was glad to see I made the cut, along with many of my beloved peers.)
I was impressed that I was able to make top-level suggestions for changes, much as someone would with a real-life intern or assistant, and that it responded appropriately. And while it initially missed changes in some journalists’ employment status, when I asked it to revisit some results, it quickly corrected them. Another nice feature: The output was downloadable as a Word or Excel file, making it easy to edit or share with others.
Manus hit a snag, though, when accessing journalists’ news articles behind paywalls; it frequently encountered CAPTCHA blocks. Since I was able to follow along step by step, I could easily take over to complete these, though many media sites still blocked the tool, citing suspicious activity. I see potential for major improvements here, and it would be useful if a future version of Manus could proactively ask for help when it encounters these kinds of restrictions.
Task 2: For the apartment search, I gave Manus a complex set of criteria, including a budget and several other parameters: a spacious kitchen, outdoor space, access to downtown Manhattan, and a major train station within a seven-minute walk. Manus initially interpreted vague requirements like “some sort of outdoor space” too literally, completely excluding properties without a private terrace or balcony. However, after more guidance and clarification, it was able to compile a broader and more helpful list, giving recommendations in tiers and neat bullet points.
The final output read like a product-recommendation guide, with subheads like “best overall,” “best value,” and “luxury option.” This task (including the back-and-forth) took less than half an hour, a lot less time than compiling the list of journalists (which took a little over an hour), likely because property listings are more openly available and well structured online.
Task 3: This was the biggest in scope: I asked Manus to nominate 50 people for this year’s Innovators Under 35 list. Producing this list is an enormous undertaking, and we typically get hundreds of nominations every year. So I was curious to see how well Manus could do. It broke the task into steps, including reviewing past lists to understand the selection criteria, creating a search strategy for identifying candidates, compiling names, and ensuring a diverse selection of candidates from around the world.
Developing a search strategy was the most time-consuming part for Manus. While it didn’t explicitly outline its approach, the Manus’s Computer window revealed the agent rapidly scrolling through the websites of prestigious research universities, tech award announcements, and news articles. However, it again encountered obstacles when attempting to access academic papers and paywalled media content.
After three hours of scouring the internet (during which Manus, understandably, asked me multiple times whether I could narrow the search), it was only able to give me three candidates with full background profiles. When I pressed it again to provide a complete list of 50 names, it eventually generated one, but certain academic institutions and fields were heavily overrepresented, reflecting an incomplete research process. After I pointed out the issue and asked it to find five candidates from China, it managed to compile a solid five-name list, though the results skewed toward Chinese media darlings. Ultimately, I had to give up after the system warned that Manus’s performance might decline if I kept inputting too much text.
My assessment: Overall, I found Manus to be a highly intuitive tool suitable for users with or without coding backgrounds. On two of the three tasks, it provided better results than ChatGPT DeepResearch, though it took significantly longer to complete them. Manus seems best suited to analytical tasks that require extensive research on the open web but have a limited scope. In other words, it’s best to stick to the kinds of things a skilled human intern could do in a day of work.
Still, it’s not all smooth sailing. Manus can suffer from frequent crashes and system instability, and it can struggle when asked to process large chunks of text. The message “Due to the current high service load, tasks cannot be created. Please try again in a few minutes” flashed on my screen a few times when I tried to start new requests, and occasionally Manus’s Computer froze on a certain page for a long period of time.
It has a higher failure rate than ChatGPT DeepResearch, a problem the team is addressing, according to Manus’s chief scientist, Peak Ji. That said, a Chinese media outlet reports that Manus’s per-task cost is about $2, just one-tenth of DeepResearch’s cost. If the Manus team strengthens its server infrastructure, I can see the tool becoming a popular option for individual users, particularly white-collar professionals, independent developers, and small teams.
Finally, I think it’s really valuable that Manus’s working process feels relatively transparent and collaborative. It actively asks questions along the way and retains key instructions as “knowledge” in its memory for future use, allowing for an easily customizable agentic experience. It’s also very nice that each session is replayable and shareable.
I expect I’ll keep using Manus for all kinds of tasks, in both my personal and professional lives. While I’m not sure the comparisons to DeepSeek are quite right, it serves as further evidence that Chinese AI companies are not just following in the footsteps of their Western counterparts. Rather than simply innovating on base models, they’re actively shaping the adoption of autonomous AI agents in their own way.
