Summary of "Perplexity's NEW Computer Just DESTROYED Every AI Tool You're Paying For"
High-level summary
Perplexity Computer (running in Perplexity’s Comet browser) is demonstrated in a hands-on review. The presenter gives the system five real-world jobs, showing prompts, activity logs, and final outputs. The goal is to highlight how the product orchestrates multiple AI models and connectors, runs multi-step agent workflows in the background, and integrates with the apps and data you already use so you don’t have to copy and paste between tools.
The video is a demo-style walkthrough that emphasizes real-world tasks, transparency of agent activity, parallel execution, and extensibility via a skills marketplace.
Key technological concepts and product features
- Comet browser
  - Perplexity’s browser extension/new-tab UI where “Computer” runs.
- Tasks
  - Workspace where each job is saved; multiple tasks can run concurrently.
  - All outputs (reports, images, code, clips) are auto-saved.
- Connectors
  - Native integrations to external tools (Google Drive, Gmail, Slack, GitHub, Salesforce, Snowflake, CB Insights, etc.) allowing the AI to read from and write to real data sources.
- MCP (Model Context Protocol)
  - A “universal language” layer that lets models interact with software and tools (analogy: a universal remote). Enables connector interoperability.
- Skills
  - Pre-built capabilities (research assistant, media processor, finance markets, web app builder, data visualization, etc.).
  - The system auto-selects skills based on the task; users can add third-party skills from a marketplace (skills.sh).
- Orchestrator model selection
  - You can choose which model runs orchestration (the example used Claude Opus 4.6 for complex reasoning).
- Credits system
  - Tasks consume credits (example: a content strategy task cost 97 credits on the Pro plan; Pro ≈ 4,000 credits).
- Activity log / transparency
  - A live step-by-step log shows what agents do (which skills they launched, browsing activity, API calls). The system is not presented as a black box.
- Parallelism + agents
  - True parallel execution: multiple agent workflows run simultaneously.
  - Agents can self-diagnose and self-fix code errors without human intervention.
- Marketplace & extensibility
  - Install new skills from skills.sh by pasting a URL; installed skills become available to tasks.
- Auto media processing
  - Transcription with timestamps, automated clip selection, video extraction, and auto-generated thumbnails and headlines (the presenter estimates ~80% are useful out of the box).
- Full-stack app generation
  - Demonstrated building a working stock-educator full-stack app with zero user code by combining finance, web app, and data-visualization skills.
- Data-source-driven research
  - Can pull premium data (e.g., CB Insights) via connectors, cross-reference web and paid sources, and produce formatted reports.
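MCP, mentioned above as a “universal language” layer, is in practice a JSON-RPC-based protocol: the orchestrator sends a request naming a tool and its arguments, and the connector (an MCP server) replies with a result. A minimal sketch of what such a request looks like; the connector and tool names here are hypothetical, not ones the video confirms:

```python
import json

def mcp_tool_call(request_id: int, tool: str, arguments: dict) -> str:
    """Build an MCP-style JSON-RPC 2.0 'tools/call' request string.

    This is the general message shape MCP uses; any real client
    library would wrap this in transport and session handling.
    """
    return json.dumps({
        "jsonrpc": "2.0",
        "id": request_id,
        "method": "tools/call",
        "params": {"name": tool, "arguments": arguments},
    })

# Hypothetical example: ask a (fictional) Drive connector to search files.
request = mcp_tool_call(1, "drive_search", {"q": "Q3 roadmap"})
print(request)
```

Because every connector speaks this same request/response shape, the orchestrator can route any model to any tool without per-integration glue code, which is the interoperability the summary describes.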
Concrete demos / tasks performed
- YouTube content strategy audit
  - Prompt: Analyze the top 5 performing videos plus comments and recommend next videos.
  - Process: The orchestrator launched a research assistant, browsed YouTube, read comments, and pulled views/links.
  - Output: A content strategy recommendation (70% mastery guides, 30% podcasts); cost = 97 credits.
- YouTube clip generator + thumbnails
  - Prompt: Analyze a full video, cut clips, and produce thumbnails.
  - Process: Full transcription with timestamps; algorithmic selection of 5–6 clip-worthy segments; video extraction; auto-generated thumbnails and headlines (~80% quality).
- Full-stack stock educator app
  - Prompt: Build an app to input 3 stocks, visualize 10-year performance, map historical reasons for moves, and teach entry/exit points.
  - Process: Installed an external stock-analysis skill from skills.sh; launched finance, web-app, and data-visualization skills. The build hit an Anthropic SDK import error midway; the agent read its own error log, rewrote the code, re-ran the build, and finished without human input.
  - Output: A working app URL with 10-year charts, 52-week high, current price, and event-mapped insights. The presenter cautions it is for learning, not investment advice.
- Market research across CB Insights + the web
  - Prompt: From market insights, identify the 10–15 jobs most impacted by AI and the top 5 industries to build in.
  - Process: Used the CB Insights connector and web sources simultaneously, cross-referenced the data, and compiled a report.
  - Output: Top industries identified include healthcare, financial services, and education, plus detailed role-level breakdowns and in-demand skills.
- Parallel task throughput demo (implied)
  - Multiple internal tests run in parallel to illustrate throughput and compounding productivity.
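The parallel execution the demos lean on can be illustrated with a toy concurrency sketch. This is not Perplexity’s implementation; it only shows the idea that several independent agent workflows can be launched at once and collected when all finish (the task names and steps are made up):

```python
import asyncio

async def run_agent_task(name: str, steps: list[str]) -> str:
    """Simulate one agent workflow: each step yields control so the
    other tasks can make progress at the same time."""
    for step in steps:
        await asyncio.sleep(0)  # stand-in for a real browse/API call
    return f"{name}: done ({len(steps)} steps)"

async def main() -> list[str]:
    # Launch all workflows at once and gather results in order.
    return await asyncio.gather(
        run_agent_task("content-audit", ["fetch videos", "read comments"]),
        run_agent_task("clip-generator", ["transcribe", "select clips"]),
        run_agent_task("market-research", ["query CB Insights", "compile"]),
    )

results = asyncio.run(main())
print(results)
```

The point of the demo is exactly this structure: the wall-clock time is bounded by the slowest task rather than the sum of all of them, which is where the “compounding productivity” claim comes from.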
Notable behaviors and claims
- Self-debugging: agents can detect errors (e.g., the SDK import failure), patch code, and continue without human intervention.
- Real browsing: The system actually navigates external sites (YouTube, CB Insights) rather than hallucinating results.
- Auto skill choice: The system picks which skills to use for a job; users can override the orchestrator model but may not always control skill choice.
- Parallel workflows compound productivity: Running many complex tasks simultaneously is a core differentiator.
- Orchestration layer: Perplexity Computer acts as a control/orchestration layer that runs existing LLMs (Claude, GPT, etc.) and integrates with enterprise tools; it isn’t positioned as a replacement for base models.
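The self-debugging behavior described above is, structurally, a run-inspect-patch-retry loop. A toy sketch of that loop, assuming a `repair` step that stands in for an LLM rewriting the code from its own error log (the broken import mirrors the SDK failure in the demo, but the code is entirely hypothetical):

```python
def run_with_self_repair(source: str, repair, max_attempts: int = 3):
    """Execute `source`; on error, ask `repair(error, source)` for a
    new version and retry -- a stand-in for an agent reading its own
    error log and rewriting its code."""
    for attempt in range(1, max_attempts + 1):
        try:
            scope: dict = {}
            exec(source, scope)           # run the current version
            return scope.get("result"), attempt
        except Exception as err:          # the "error log"
            source = repair(err, source)  # rewrite and loop
    raise RuntimeError("gave up after max_attempts")

# Toy repair policy: the failure is a bad import, so strip that line.
def drop_bad_import(err: Exception, source: str) -> str:
    return "\n".join(
        line for line in source.splitlines()
        if not line.startswith("import nonexistent_sdk")
    )

broken = "import nonexistent_sdk\nresult = 2 + 2\n"
value, attempts = run_with_self_repair(broken, drop_bad_import)
print(value, attempts)  # the fix lands on the second attempt
```

A real agent replaces the hard-coded `drop_bad_import` with a model call that reads the traceback, but the control flow (attempt, capture error, rewrite, re-run) is the same.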
Performance and cost examples
- Example: The content strategy task cost 97 credits on the Pro plan (Pro ≈ 4,000 credits).
- The UI shows credit usage in real time during tasks.
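Taking the two numbers quoted in the video at face value (97 credits per task, a Pro allotment of roughly 4,000 credits), the back-of-envelope math on plan capacity works out as follows; the figures are the video's, not official pricing:

```python
pro_credits = 4000   # approximate Pro plan allotment, per the video
task_cost = 97       # credits consumed by the content strategy task

tasks_per_plan = pro_credits // task_cost   # whole tasks the allotment covers
share_of_plan = task_cost / pro_credits     # fraction used per task

print(tasks_per_plan)           # 41 tasks of this size per allotment
print(f"{share_of_plan:.1%}")   # ~2.4% of the allotment per task
```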
Limitations and cautions
- Thumbnails and copywriting are not perfect initially (roughly 80% usable).
- Not a substitute for professional investment advice; stock analysis demo is educational only.
- Technical errors (e.g., SDK imports) can occur, though agents may self-repair during the workflow.
Comparisons and context
- Compared to OpenClaw (referenced): both reflect the trend toward agent orchestration over models, but Perplexity Computer runs in the cloud while OpenClaw runs locally.
- Presenter referenced his separate “Cloudbot”/OpenClaw deep-dive videos for additional under-the-hood details.
Guides, tutorials, and resources mentioned
- Demonstrations covered:
  - How to open Computer in the Comet browser and where to click.
  - How to choose the orchestrator model.
  - How to install skills from skills.sh.
  - How to add connectors like CB Insights.
- Prompts used for each task were shared with the presenter’s WhatsApp community and linked in the video description.
- Links were promised to the related breakdown videos (Cloudbot/OpenClaw) and the creator’s channel.
Main speakers and referenced systems
- Presenter: Vibhav Cicinti (the video host and demonstrator).
- Products / systems: Perplexity Computer, Comet browser, MCP (Model Context Protocol), skills.sh.
- Models & SDKs referenced: Claude Opus 4.6 (Anthropic/Claude), ChatGPT (referenced), Anthropic SDK (error encountered).
- Data/connectors: Google Drive, Gmail, Slack, GitHub, Salesforce, Snowflake, CB Insights.
- Other referenced projects/videos: “Cloudbot” breakdown, OpenClaw.
Category: Technology