A research assistant that
reads the papers so you can think.
arXiv, Semantic Scholar, OpenAlex, and your own notes in one cited chat. Topic → tree-of-papers → structured lit review with 50+ citations and open questions, in under 5 minutes.
No credit card required. Cancel anytime.
The 5-minute lit review
Streamed, cited, approval-gated where it matters.
> topic: "sparse attention mechanisms for long-context LLMs, post-2023"
retrieve → 218 papers (arXiv + SemScholar + OpenAlex) ✓
cluster  → 6 thematic branches ✓
cite     → every claim hyperlinked to a DOI ✓
open-qs  → 11 unresolved research questions ✓

Lit review — "Sparse attention for long-context LLMs" [tree]
I. Token-level sparsity (24 papers)
   — Sliding window (Longformer, Big Bird, Longformer 2)
   — Learned sparsity (Reformer, Linformer)
   Key finding: O(n·√n) attention viable past 128K tokens.
II. Hierarchical / recursive attention (41 papers)
   — RetNet, RWKV, Mamba, Mamba-2
   — State-space models: linear-time, constant memory.
III. KV-cache compression (32 papers)
   — H2O, StreamingLLM, ScissorHands
   — Trade: recall vs memory; open at needle-in-haystack.
IV. Infini-attention + compressive memory (18 papers)
V. Ring attention + tensor parallelism (27 papers)
VI. Empirical benchmarks (MQAR, RULER, RU32k) (76 papers)

Open questions (11 total, top 3):
Q1. No consensus method for benchmarking past 1M tokens.
Q2. How do sparse attention + KV compression interact?
Q3. Energy / cost parity vs dense attention unclear.

Export: BibTeX, CSL JSON, Notion, Obsidian markdown.
What it actually does
Tree-of-papers lit review
Paste a topic, or drop in up to 10 seed papers. The engine retrieves related work, clusters it into thematic branches, and drafts a structured review with every claim cited inline.
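Under the hood, the flow is retrieve → cluster → draft. Here's a toy sketch in Python of how those three stages fit together — every name and data value is illustrative, not our production API:

```python
# Toy sketch of the tree-of-papers pipeline; all names are illustrative.
from collections import defaultdict

def retrieve(topic: str) -> list[dict]:
    # Stand-in for federated search over arXiv + Semantic Scholar + OpenAlex.
    return [
        {"id": "paper:1", "theme": "token-level sparsity"},
        {"id": "paper:2", "theme": "KV-cache compression"},
        {"id": "paper:3", "theme": "token-level sparsity"},
    ]

def cluster(papers: list[dict]) -> dict[str, list[dict]]:
    # Group retrieved papers into the thematic branches of the tree.
    branches: dict[str, list[dict]] = defaultdict(list)
    for p in papers:
        branches[p["theme"]].append(p)
    return branches

def draft(branches: dict[str, list[dict]]) -> str:
    # One section per branch; every paper id becomes an inline citation.
    return "\n".join(
        f"{theme}: {len(ps)} papers [{', '.join(p['id'] for p in ps)}]"
        for theme, ps in branches.items()
    )

print(draft(cluster(retrieve("sparse attention"))))
```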
Citations-or-refuse
Every factual claim cites its source. When the corpus doesn't support a claim, the engine returns 'insufficient evidence' instead of hallucinating one. Perfect for writing where your committee will ask, 'Where'd that come from?'
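The gate itself is simple in principle: a claim ships only if every source it cites is actually in the retrieved corpus. A minimal sketch — hypothetical names, not our implementation:

```python
# Minimal sketch of a citations-or-refuse gate; names/data are illustrative.
from dataclasses import dataclass

@dataclass
class Claim:
    text: str
    source_ids: list[str]  # sources the retriever actually returned

def emit(claim: Claim, retrieved: set[str]) -> str:
    # Ship the claim only if every cited source exists in the retrieved corpus.
    if claim.source_ids and all(s in retrieved for s in claim.source_ids):
        return f"{claim.text} [{', '.join(claim.source_ids)}]"
    return "insufficient evidence"

print(emit(Claim("Sliding-window attention scales linearly.", ["paper:42"]),
           retrieved={"paper:42"}))
```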
Methods + datasets graph
For empirical domains, we extract methods + benchmarks + datasets across papers and build a comparison graph. Drop a new paper in and see how its setup compares to the 50 before it.
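Conceptually, the graph links each paper to the methods and benchmarks it uses, so comparing setups reduces to set intersection. A toy sketch, with schema and entries invented for illustration:

```python
# Toy comparison graph: each paper maps to the methods/benchmarks it uses.
# Schema and entries are invented for illustration.
graph: dict[str, set[str]] = {
    "paper:A": {"method:kv-eviction", "benchmark:RULER"},
    "paper:B": {"method:attention-sink", "benchmark:RULER"},
}

def overlap(new_setup: set[str], graph: dict[str, set[str]]) -> dict[str, set[str]]:
    # For each indexed paper, report what it shares with the new paper's setup.
    return {pid: attrs & new_setup for pid, attrs in graph.items() if attrs & new_setup}

print(overlap({"benchmark:RULER", "method:kv-eviction"}, graph))
```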
Your personal corpus
Upload your own PDFs, annotations, and drafts. The chat searches them alongside arXiv, Semantic Scholar, and OpenAlex. Your notes stay private to your workspace.
Connected to the sources you already use
Pricing
Monthly flexibility, annual discount available.
Student
- Unlimited lit reviews + cited chat
- 50 PDF uploads (10 MB each)
- BibTeX + CSL export
- Notion + Obsidian sync
- 14-day free trial, no credit card
PI ($99)
- Everything in Student
- Shared lab workspace + knowledge base
- Unlimited uploads (up to 500 GB storage)
- Team citation graph
- SSO (SAML via WorkOS)
- Priority support
FAQ
How is this different from Elicit or Consensus?
Elicit excels at structured Q&A over papers; Consensus focuses on surfacing consensus claims across studies; Perplexity Pro is a general-purpose assistant with paper access. We combine all three — tree-of-papers review, consensus-claim detection, and cited chat — and we're the only one where your uploaded PDFs live in the same search index as arXiv.
Will it fabricate citations?
No. We enforce a citation-required policy: the engine refuses to output any citation that isn't in the retrieved corpus. On our eval set the fabrication rate is zero, and we publish the Ragas + DeepEval scores monthly.
Can it write my dissertation?
It can draft sections with full citations, and we provide a Notion/Obsidian export. Final writing is on you — academic integrity policies vary and we don't want to be the reason your committee loses confidence. Use it as a very fast literature search + outline tool.
How current is the corpus?
arXiv: updated daily. Semantic Scholar: weekly. OpenAlex: weekly. PubMed: daily. Your own uploads: searchable the moment the upload finishes.
Student pricing — what qualifies?
A valid .edu (or national equivalent) email. We verify at signup. Not a student? The $99 PI tier is still far below any Bloomberg-class enterprise seat.
Ready to try it?
14-day free trial. No credit card. Cancel in one click.
Start free trial