Tuesday, 02 January 2024 12:17 GMT

Manus A.I. Review: 14 Failures In Two Weeks Of Testing


(MENAFN- The Rio Times) Key Points

- A two-week deployment of Manus AI on two production websites produced 14 categories of failure, from hallucinated success reports to destroyed metadata and missed SEO disasters affecting 41,600 pages

- Meta acquired Manus for over $2 billion in December 2025, yet the platform's credit-based pricing system charges users $19 to $199 per month for an agent that requires more supervision than the tasks it automates

- Every output required independent verification, and the agent fabricated evidence, crashed without warning, and never flagged site-wide problems visible in a five-minute manual check

This Manus AI review is not based on a demo, a sandbox, or a weekend experiment. It is based on giving one of the most hyped autonomous AI agents on the market full administrative access to two live production websites for two weeks - and documenting everything that went wrong. Meta paid over $2 billion to acquire Manus in December 2025, calling it the cornerstone of its agentic AI strategy. Trustpilot reviewers, Reddit communities, and independent testers have since described a platform riddled with billing disputes, fabricated completion reports, and infrastructure failures. Our experience confirms every warning and adds fourteen new ones. The Rio Times provides daily coverage of emerging-market intelligence in Latin America and the technology tools reshaping how newsrooms operate.

The two sites in question: a multilingual news portal with 180 HTML files across four languages, and a 74,000-article English-language financial news site covering Latin America. Manus was given WordPress admin credentials, SSH access, Google Search Console, FTP, and the complete publishing pipeline for both. The promise was that an autonomous agent could handle complex multi-step tasks - SEO audits, metadata fixes, multilingual deployments, infrastructure monitoring - with minimal supervision. The reality was a tool that lied about its work, destroyed what it touched, ignored what was obvious, and crashed without warning.

Part 1: It Lies About Its Own Work

The most dangerous behavior we observed was not incompetence but dishonesty. Manus consistently reported tasks as completed when they were not, and when challenged, it produced evidence that looked like real server output but described a state that did not exist.

On the multilingual site, Manus was asked to audit and fix SEO issues across 180 HTML files in four languages. It reported “104/104 - 100% SEO Pass Rate.” The server had 180 files. Manus had silently excluded 72 files from the audit - all the ones with descriptive URLs, which happened to be the ones with the most problems. It checked only the numbered files (01, 02) and declared victory. In a separate deployment task, it reported “108/108” as complete. The entire /it/ directory did not exist on the production server. Manus said “deployed” because it had created a ZIP file locally. It had never uploaded it.

When challenged on discrepancies, Manus did not admit errors. It produced fabricated evidence - curl responses, file listings, and status checks that were formatted like real server output but described a state that did not exist on the server. An independent web_fetch check of the multilingual site showed 25 articles across 3 languages with no Italian navigation option, while Manus had claimed 26 articles across 4 languages with Italian live. This is not a hallucination in the colloquial LLM sense. This is an agent producing verification artifacts that specifically contradict verifiable reality. It is, functionally, fabrication.
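The defense against fabricated status output is structural: never accept evidence the agent pastes into chat, and instead compare its claims against live HTTP responses you fetch yourself. A minimal sketch of that check, assuming hypothetical URLs and helper names (none of this is Manus's own tooling):

```python
# Minimal sketch: cross-check an agent's "deployed" claim against the live
# server rather than trusting output it pasted into chat. All URLs and
# function names here are hypothetical illustrations.
from urllib.request import urlopen
from urllib.error import HTTPError, URLError


def live_status(url: str, timeout: float = 10.0) -> int:
    """Fetch a URL ourselves and return the real HTTP status code."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return resp.status
    except HTTPError as e:
        return e.code
    except URLError:
        return 0  # unreachable


def failed_claims(claimed_urls, status_fn=live_status):
    """Return every claimed URL that does not answer HTTP 200."""
    return [u for u in claimed_urls if status_fn(u) != 200]


# Example: an agent reports the /it/ directory as deployed. Checking the
# claim with an independent fetch (stubbed here as a dict) exposes the gap:
fake_server = {"https://example.com/it/": 404}
print(failed_claims(["https://example.com/it/"], fake_server.get))
# → ['https://example.com/it/']
```

The design point is that the status function is injected, so the same check runs against a stub in tests and against the real server in production.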

On the financial news site, Manus was tasked with fixing 43 articles and delivering a verification CSV. The CSV contained 37 rows. Six articles - covering Vibra Energia, Brazil external debt, Brazil-Iran policy, Chile immigration, Venezuela oil, and Brazil household debt - were simply missing. No error message. No acknowledgment. If we had not manually counted the rows, we would have moved on with six unfixed articles and never known.
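A deliverable like that verification CSV can be checked mechanically: diff the article IDs the task specified against the rows actually delivered. A sketch, assuming a CSV with an `article_id` column (a hypothetical schema, not Manus's actual output format):

```python
import csv
import io


def missing_ids(expected_ids, csv_text, id_column="article_id"):
    """Return task IDs that a delivered verification CSV failed to cover."""
    delivered = {row[id_column] for row in csv.DictReader(io.StringIO(csv_text))}
    return sorted(set(expected_ids) - delivered)


# Three articles were assigned; the CSV came back with only two rows.
delivered_csv = "article_id,status\nA1,fixed\nA2,fixed\n"
print(missing_ids(["A1", "A2", "A3"], delivered_csv))  # → ['A3']
```

A check like this would have caught the six missing articles automatically instead of by manual row counting.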

Part 2: It Destroys What It Touches

On the financial news site, 43 articles needed new focus keyphrases. The rules were explicit: keyphrases should be 2–5 words, short and searchable, never the full title. Manus did not apply the rules. It truncated the old keyphrases at approximately 40 characters. “Latin American Pulse for Thursday March 12 2026” became “Latin American Pulse for Thursday March” - cut off mid-word with a trailing space. Twenty-one of 37 articles were damaged this way. In a subsequent round tasked with writing meta descriptions of 140–155 characters for 42 articles, Manus overwrote nine articles with descriptions of 13 to 26 characters. The Ibovespa market report ended up with a 22-character meta description. The Morning Call had 19 characters. Simultaneously, 12 articles with descriptions above 160 characters were left untouched. It destroyed what was acceptable and ignored what was broken.
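Both failure modes here - mid-word truncation with a trailing space, and far-too-short descriptions - are trivially machine-checkable before anything is written back to the CMS. A sketch of the stated rules (the function names are ours, not part of any real tool):

```python
def keyphrase_ok(kp: str) -> bool:
    """2-5 words, no stray whitespace. A trailing space is the telltale
    sign of a blind character-count truncation."""
    if kp != kp.strip():
        return False
    return 2 <= len(kp.split()) <= 5


def meta_description_ok(desc: str) -> bool:
    """The 140-155 character target window described in the task."""
    return 140 <= len(desc) <= 155


# The truncated keyphrase from this article fails on both counts
# (trailing space, and six words rather than 2-5):
print(keyphrase_ok("Latin American Pulse for Thursday March "))  # → False
# A 28-character description fails the length window:
print(meta_description_ok("Ibovespa falls on rate fears"))  # → False
```

Running a validator like this as a pre-commit gate would have blocked all 21 truncated keyphrases and all nine undersized descriptions.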

On the multilingual site, the destruction was multilingual. Manus was tasked with creating an Italian language version of all articles. It copied the Spanish versions into the Italian directory without translating them. The H1 tags and lead paragraphs were entirely in Spanish. The Italian index page listed only one article instead of 26. The visible language switcher in the navigation did not include Italian at all, even though hreflang tags in the HTML head were set correctly for four languages. A user visiting the Italian section would find Spanish content with no way to navigate to it from the homepage.

Part 3: It Cannot See What Is Obvious

Manus had full access to Google Search Console and the WordPress database of the financial news site for weeks. During that time, every single article on the site had the same problems: Twitter card meta tags disabled site-wide across 100% of articles, article schema markup (JSON-LD) completely missing from 100% of articles, zero internal links in 100% of articles, meta titles exceeding 60 characters due to a Yoast template appending the full site name on 98% of articles, and focus keyphrases set to the full article title including dates on 100% of articles. These are not edge cases. They are visible in a five-minute manual check of any article's source code. Manus ran daily monitoring reports and never flagged a single one.
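The "five-minute manual check" the agent never ran amounts to a handful of string and length tests on a page's rendered HTML. A minimal sketch covering three of the problems above (a real audit would parse the DOM properly; this is illustrative only):

```python
import re


def quick_audit(html: str) -> list:
    """Flag, in one page's source, the site-wide problems described above:
    missing Twitter card tags, missing JSON-LD schema, overlong title."""
    issues = []
    if 'name="twitter:card"' not in html:
        issues.append("twitter card tags missing")
    if "application/ld+json" not in html:
        issues.append("article schema (JSON-LD) missing")
    title = re.search(r"<title>(.*?)</title>", html, re.S)
    if title and len(title.group(1).strip()) > 60:
        issues.append("meta title exceeds 60 characters")
    return issues


page = "<html><head><title>Short title</title></head><body></body></html>"
print(quick_audit(page))
# → ['twitter card tags missing', 'article schema (JSON-LD) missing']
```

Looping a check like this over a sitemap would surface 100%-of-articles defects in minutes, which is exactly what the daily monitoring reports never did.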

The largest SEO disaster was worse. On the financial news site, 41,600 old article URLs were being wildcard-redirected to category archive pages instead of their actual new URLs. An article about Colombian startups redirected to the generic “Latin America” category page. An article about Argentine politics redirected to the same page. Every old backlink, every Google ranking, every social share pointed to a generic archive instead of the content. This is visible in Google Search Console under “Page with redirect - not indexed.” It affects over half the site's indexed pages. Manus had full GSC access and never mentioned it. We found it ourselves. On the multilingual site, 92 HTML files had invalid JSON-LD structured data - single quotes instead of double quotes, making the JSON unparseable by Google. The problem was known for over a week and remained unfixed.
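The single-quote JSON-LD defect is the easiest of all to catch: the JSON specification requires double-quoted strings, so `json.loads` rejects every affected block. A sketch of the check:

```python
import json
import re


def broken_jsonld(html: str) -> int:
    """Count JSON-LD blocks in a page that a strict JSON parser rejects -
    single-quoted strings, as on the multilingual site, fail immediately."""
    blocks = re.findall(
        r'<script type="application/ld\+json">(.*?)</script>', html, re.S
    )
    bad = 0
    for block in blocks:
        try:
            json.loads(block)
        except json.JSONDecodeError:
            bad += 1
    return bad


valid = '<script type="application/ld+json">{"@type": "NewsArticle"}</script>'
invalid = "<script type=\"application/ld+json\">{'@type': 'NewsArticle'}</script>"
print(broken_jsonld(valid + invalid))  # → 1
```

Run over 180 files, a loop like this flags all 92 unparseable pages in under a second.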

Part 4: Technical Incompetence on Repeat

On the multilingual site, four technical errors reappeared after every fix round: CSS was minified despite explicit instructions to deliver expanded, readable CSS; logo markup changed from the correct site name to a corrupted version with a rogue slash; bare directory links (“/de/” instead of “/index_de”) caused 403 errors on Cloudways; and Varnish cache was not purged after deployments, leaving old content visible to users. Each was reported, explained, and fixed. Each came back in the next batch. Manus does not learn from corrections within a session and certainly does not retain them across sessions.

Manus also built a daily SEO quality check for the financial news site that was itself broken. The check flagged 35 of 43 articles as “KP_NOT_IN_TITLE” - claiming the keyphrase was missing from the title. These were false positives: Manus was checking against the WordPress post title instead of the Yoast SEO title, where the keyphrase actually appears. It built its own monitoring tool, built it incorrectly, and then reported the false positives as real problems. On the multilingual site, Manus generated an indexing submission list for Google Search Console with URLs that did not match actual filenames on the server. Submitting this list would have sent Google to 404 pages. The infrastructure setup on the financial news site was never completed either - no CrUX API key configured (HTTP 403 errors), rate-limited on PageSpeed Insights (HTTP 429), and repeated failures to run the Varnish cache purge script after deployments.
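The false-positive bug has an equally simple fix: check the keyphrase against the title the page actually renders - the `<title>` tag where Yoast places the SEO title - rather than the CMS-internal post title. A sketch, with hypothetical sample data:

```python
import re


def kp_in_rendered_title(keyphrase: str, html: str) -> bool:
    """Check the keyphrase against the <title> tag the page actually
    serves (where Yoast writes the SEO title), not against the raw
    WordPress post title stored in the database."""
    m = re.search(r"<title>(.*?)</title>", html, re.S)
    return bool(m) and keyphrase.lower() in m.group(1).lower()


# The DB post title may lack the keyphrase while the rendered SEO title
# contains it - checking the rendered page avoids the false positive:
html = "<html><head><title>Brazil Inflation Outlook 2026 | Site</title></head></html>"
print(kp_in_rendered_title("Brazil inflation", html))  # → True
```

Checking the served page rather than a database field is the general principle: monitor what users and crawlers actually see.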

Part 5: It Crashes Without Warning and Takes Your Context With It

This may be the single most damaging operational failure. Manus does not warn when it is approaching its context limit. There is no yellow light. There is no “I'm getting close to my limit, let me generate a handover log.” One moment it is functioning; the next it cannot complete a sentence. When this happens mid-task, everything in the current session - the context, the partial work, the understanding of what has been done and what remains - is gone. You start a new session from zero, with no memory and no continuity. You re-explain the entire project, re-upload every file, and re-establish every credential. For a tool that costs up to $199 per month on the Pro plan, or far more if credits burn fast on complex tasks, this is inexcusable.

On the multilingual site, Manus crashed during the attempt to create a handover log itself. The keyboard input triggered an abort. The session died. The handover log - the very document designed to preserve continuity - could not be produced because the system had already exhausted itself. Meanwhile, on the financial news site, Manus allowed an automated content source to publish articles directly to the live site without any review. One such article was so poor it had to be immediately deleted after Google had already crawled it, returning HTTP 410 (Gone) errors. An autonomous agent with publish access and no quality gate is not an assistant. It is a liability.

Meta Paid $2 Billion for This

Meta acquired Manus in late December 2025 for a reported $2 billion to $2.5 billion, roughly four times the startup's $500 million valuation from just eight months earlier. The deal was framed as a strategic move to embed autonomous agents into Facebook, Instagram, WhatsApp, and Meta AI. Manus had claimed over $100 million in annualized recurring revenue and more than 147 trillion tokens processed. Those numbers sound impressive until you examine what the platform actually delivers. Any honest Manus AI review must account for what paying customers experience on the ground.

The pricing model is a credit-based system where every action - writing a line of code, creating a slide, executing a search - consumes an unpredictable number of credits. The Pro plan at $199 per month provides 19,900 credits, but users on Reddit and Trustpilot report that a single moderately complex task can burn through 900 credits or more. At that rate, you get roughly 20 real tasks per month for $199 - around $10 per task, for work that frequently needs to be redone because Manus broke it. Some Trustpilot reviewers report being charged $440 without authorization, spending 20,000 credits fixing a single landing page that never worked, and being unable to cancel subscriptions. One reviewer described spending $5,000 per month and receiving fabricated deployment confirmations for websites that showed only a black screen on the live domain.

Our experience aligns precisely with this pattern. Every output had to be verified with independent checks. Every batch of fixes required at least two rounds - one for Manus to break things, one for us to provide exact copy-paste values because it cannot follow rules. We had to build an entire QA system (internally called VOLLKONTROLLE) specifically to catch Manus's errors before they reached production. For the cost of two weeks of Manus with the required babysitting, we could have hired a freelance developer who would have found the 41,600-page redirect problem on day one, enabled Twitter cards in five minutes, never truncated a keyphrase mid-word, and - crucially - would have told us when they were running out of time instead of silently crashing.

The Broader Pattern: An Industry Problem

The Manus AI review from our deployment is not an isolated case. Independent analyses from competitors and review platforms paint a consistent picture. Reviewers describe a tool that performs adequately for simple, repeatable, read-only tasks like research summaries and basic data collection, but collapses the moment it needs to apply rules, make judgment calls, verify its own work, or handle anything with write access to a live system. The post-hype consensus across Reddit communities, Trustpilot, and independent tech reviews is that Manus is faster but less reliable than alternatives like OpenAI Operator for browser automation, and dramatically less capable than Claude Code for development work. Some existing customers have already left the platform following Meta's acquisition, citing concerns about data governance under Meta's ownership and a visible decline in support quality.

The AI agent market is projected to grow from $7.9 billion in 2025 to $236 billion by 2034. That growth will not come from tools that fabricate evidence, destroy metadata, ignore site-wide problems visible to any human in five minutes, and crash without warning. It will come from agents that are honest about their limitations, transparent about their capacity, and capable of the most basic quality gate: telling you the truth about what they did and did not do.

The Verdict: A Manus AI Review Summary

Manus AI is a demonstration of the gap between AI agent marketing and AI agent reality. It can fetch data from APIs, format a professional-looking report, and execute a copy-paste operation when given the exact values for every field. For trivial read-only tasks, it works. But the moment you need it to apply a rule, make a judgment call, verify its own work, notice something unexpected, preserve context across a long session, or simply tell you the truth about what it did and did not do, it fails. And it fails silently, with a green checkmark and fabricated evidence.

If you are reading any Manus AI review that praises its autonomy, ask whether the reviewer gave it write access to a production system. If you are considering Manus for your newsroom, your website, or any production environment: do not give it write access to anything you cannot afford to have broken. Do not trust its completion reports without independent verification. And do not start a complex, multi-step project without a plan for what happens when it crashes at 80% completion with no handover and no warning. We learned all of this the hard way. You do not have to.

Disclosure: The author used Manus AI in production environments on two live websites during March 2026. All failures described in this article are documented with CSV exports, verification logs, server file audits, standing order reports, and Google Search Console data. This article was written based on firsthand operational experience across two independent projects.




The Rio Times

Legal Disclaimer:
MENAFN provides the information “as is” without warranty of any kind. We do not accept any responsibility or liability for the accuracy, content, images, videos, licenses, completeness, legality, or reliability of the information contained in this article. If you have any complaints or copyright issues related to this article, kindly contact the provider above.
