Documentation
Everything you need to get started with Marcus Intelligence — AI-powered browser test automation.
Generating Tests
- Enter any publicly accessible URL — Marcus scrapes the page, infers functionality, and generates a suite of test cases.
- Upload a BRD (PDF or plain text) to give Marcus domain context and acceptance criteria for more targeted test generation.
- Deep Crawl mode: Marcus follows links across your site (up to 50 pages, 3 hops deep) before generating — best for SPAs and multi-step flows.
- Login support in deep crawl: provide credentials so Marcus can reach authenticated pages. Credentials are used in-memory only and never stored.
Executing Tests
- Click "Run Suite" on the Execute tab — Marcus spawns a real browser for each test.
- Tests run sequentially; each has a 3-minute timeout by default.
- Live progress is visible in the Execute tab: current test name, elapsed time, and running pass/fail counts.
- Results are saved to the database — access them any time from the Results tab.
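The execution model above — sequential tests, a per-test timeout, and running pass/fail counts — can be sketched in Python. This is an illustrative sketch, not Marcus's actual implementation; the function and test names are hypothetical.

```python
# Sketch of sequential test execution with a per-test timeout
# (3 minutes by default, per the docs above). Hypothetical code,
# not Marcus's real API.
from concurrent.futures import ThreadPoolExecutor
from concurrent.futures import TimeoutError as FuturesTimeout

DEFAULT_TIMEOUT_S = 180  # 3-minute default per test


def run_suite(tests, timeout_s=DEFAULT_TIMEOUT_S):
    """Run (name, fn) pairs one at a time; return running pass/fail counts.

    Note: future.result(timeout=...) stops waiting but does not kill the
    worker thread — a real runner would also tear down the browser.
    """
    results = {"passed": 0, "failed": 0}
    with ThreadPoolExecutor(max_workers=1) as pool:
        for name, test_fn in tests:
            future = pool.submit(test_fn)
            try:
                future.result(timeout=timeout_s)
                results["passed"] += 1
                status = "PASS"
            except FuturesTimeout:
                results["failed"] += 1
                status = "TIMEOUT"
            except Exception:
                results["failed"] += 1
                status = "FAIL"
            # Live progress: current test name plus running totals.
            print(f"{name}: {status} (totals: {results})")
    return results
```

A failing test (any raised exception) counts toward the fail total without stopping the rest of the suite.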
Understanding Results
Export your results to CSV from the Results tab for use in bug trackers or test reports.
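A minimal sketch of the kind of CSV the export produces, using Python's standard `csv` module. The column names here (`test_name`, `status`, `duration_s`) are assumptions for illustration — the actual export columns may differ.

```python
# Hypothetical sketch of results-to-CSV serialization; the column
# names are assumed, not Marcus's documented export schema.
import csv
import io


def results_to_csv(results):
    """Serialize a list of result dicts to CSV text."""
    fieldnames = ["test_name", "status", "duration_s"]
    buf = io.StringIO()
    writer = csv.DictWriter(buf, fieldnames=fieldnames)
    writer.writeheader()
    for row in results:
        writer.writerow(row)
    return buf.getvalue()
```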
Deep Crawl
- Toggle "Deep Crawl" before generating — Marcus crawls up to 50 pages using BFS (breadth-first), max 3 hops from the starting URL.
- Uses the same browser engine as test execution — no extra browser installation needed.
- Provide login credentials if your app requires authentication. Credentials are never stored or logged.
- Default crawl timeout: 5 minutes (configurable via the `MARCUS_CRAWL_TIMEOUT` env var on self-hosted deployments).
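The crawl limits above (BFS order, 50-page cap, 3-hop depth) can be sketched as a plain breadth-first traversal. This is an illustrative sketch, not the real crawler — `get_links` stands in for the browser-driven page fetch, and timeout enforcement is omitted.

```python
# Sketch of the documented deep-crawl limits: breadth-first order,
# at most 50 pages, at most 3 hops from the start URL. Hypothetical
# code; `get_links` abstracts the real browser-based page fetch.
import os
from collections import deque

MAX_PAGES = 50
MAX_DEPTH = 3
# Configurable on self-hosted deployments; default 5 minutes.
CRAWL_TIMEOUT_S = int(os.environ.get("MARCUS_CRAWL_TIMEOUT", 300))


def bfs_crawl(start_url, get_links, max_pages=MAX_PAGES, max_depth=MAX_DEPTH):
    """Visit pages level by level, bounded by page count and hop depth."""
    visited = {start_url}
    order = []
    queue = deque([(start_url, 0)])
    while queue and len(order) < max_pages:
        url, depth = queue.popleft()
        order.append(url)
        if depth == max_depth:
            continue  # at the hop limit: record the page, don't follow links
        for link in get_links(url):
            if link not in visited:
                visited.add(link)
                queue.append((link, depth + 1))
    return order
```

BFS guarantees that pages closer to the start URL are crawled before deeper ones, so the 50-page cap spends its budget on the most reachable parts of the site first.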
Bot Protection & Cloudflare
Many production sites use Cloudflare or similar WAF/bot-protection that blocks automated scrapers. Here's what to do when Marcus can't scrape your site.
Why scraping fails
Standard scraping sends plain HTTP requests without executing JavaScript. Cloudflare detects the non-browser signature and returns a 403 or JS challenge page instead of the real content.
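A scraper can detect this failure mode heuristically. The sketch below is an assumption-laden illustration, not an official API: it checks for status codes and page markers that Cloudflare challenge and block pages commonly contain.

```python
# Heuristic sketch (assumption, not an official Cloudflare or Marcus
# API): challenge/block responses typically use 403 or 503 and carry
# telltale strings in the HTML body.
def looks_like_cloudflare_challenge(status_code, body):
    """Return True if a response resembles a Cloudflare challenge page."""
    markers = (
        "just a moment",        # challenge page <title>
        "cf-chl",               # challenge script identifiers
        "attention required",   # block page title
    )
    lowered = body.lower()
    return status_code in (403, 503) and any(m in lowered for m in markers)
```

When this check fires, falling back to a real browser (Option 1 below) or BRD-only mode (Option 3) is the practical next step.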
Option 1 — Enable Deep Crawl (recommended first step)
- Deep Crawl uses a real Chromium browser that executes JavaScript and mimics human browsing — it bypasses basic bot detection in most cases.
- Enable Deep Crawl on the Generate tab before clicking Generate Tests.
Option 2 — Temporarily disable Cloudflare bot protection
- In your Cloudflare dashboard: Security → Bots → set Bot Fight Mode to Off for the duration of the scan, then re-enable it.
- Or lower your Security Level: Security → Settings → Security Level → Essentially Off, temporarily.
- To allow Marcus permanently: in Security → WAF → Tools, add an IP Access Rule with action Allow for the Marcus server IP (find it in your Cloudflare analytics under the blocked requests).
Option 3 — BRD-only mode (no scraping required)
- Upload a BRD (PDF / DOCX / TXT) describing your application and select BRD Only mode. Marcus generates tests from your requirements without ever scraping the site.
- Test execution runs directly in a real browser and is unaffected by Cloudflare scraping protection — only test generation requires access to the live page.
FAQ
Which browser does Marcus use to run tests?
A Chromium-based browser. Firefox and Safari are not currently supported.
Can Marcus test sites on localhost or a private network?
Only if your Marcus instance is self-hosted on the same network. The cloud version requires a publicly accessible URL.
Are my login credentials stored?
No. Login credentials entered for deep crawl are used in-memory only and are never persisted to the database, logs, or session state.
How do I get support?
Email info@marcusai.in or use the in-app chat on any plan.