Early access

Turn any topic into a
structured research course.

Gregator is for people who want to actually learn things, not just get AI summaries. Enter a topic, and in a few minutes you get a multi-chapter course built from real sources — with credibility scores, inline citations, and an AI companion to discuss it all with.

A real course, not a chat reply

Here is an actual chapter from a Gregator course. Each claim links to a source. Each source has a credibility score. You read it like an article, not a conversation.

8 sources · 5 peer-reviewed · 3 min read

Chapter 2: Lipid Nanoparticle Delivery

Naked mRNA is inherently unstable in biological environments. Ribonucleases in the bloodstream degrade unprotected mRNA within minutes, making direct injection ineffective for most therapeutic applications.[1] The solution is encapsulation in lipid nanoparticles (LNPs) — roughly 80–100 nm spheres composed of ionizable lipids, helper lipids, cholesterol, and PEG-lipid conjugates.[2]

How LNPs enter cells

LNPs are taken up by cells through receptor-mediated endocytosis. Once inside the endosome, the ionizable lipid component becomes positively charged in the acidic environment (pH ~5–6), destabilizing the endosomal membrane and releasing the mRNA cargo into the cytoplasm.[1][3] This is where ribosomes can translate the mRNA into the target protein — in the case of COVID-19 vaccines, the SARS-CoV-2 spike protein.

Strong evidence
Endosomal escape mechanism confirmed across 3 independent cryo-EM studies (2019–2021) with consistent results.

Stability and storage challenges

Despite their effectiveness, LNPs introduce practical constraints. The lipid formulation is sensitive to temperature — the BNT162b2 (Pfizer-BioNTech) vaccine originally required storage at -70 °C, though reformulation improved this to standard refrigeration for shorter periods.[4]

Mixed evidence on reformulation
Two sources cite improved stability at 2–8 °C for 10 weeks; one FDA review notes the data is based on a single-lot study.
Read a full sample course: "How does mRNA vaccination work?" →

Topic in, course out. A few minutes.

You type a topic. Gregator runs a research pipeline and shows you progress in real time. When it finishes, you have a course you can read and discuss.

1

Enter your topic

A question, a concept, a field. Gregator breaks it into chapters — typically 3 to 8 depending on complexity — and decides what angles each one needs.

2

Sources are gathered and scored

For each chapter, Gregator queries academic databases, web search, and news in parallel. Sources are deduplicated, then each one is scored for credibility: evidence type, publication tier, and whether it contains marketing language.

3

Chapters are synthesized

Claims from the strongest sources are extracted and assembled into structured narratives. Every claim has an inline citation. Where evidence conflicts, Gregator flags it rather than hiding the disagreement.

4

Read, discuss, track progress

You get a multi-chapter course you can read at your own pace. Open any source to see its full assessment. Chat with Greg to go deeper. Your progress is tracked from first encounter to mastery.

How does mRNA vaccination work? Researching
Decomposed into 5 chapters 1.2s
Search queries dispatched 0.8s
42 sources gathered and deduplicated 3.4s
Scoring credibility — 18 of 42 done ~20s
Writing chapter narratives

Tools to go deeper over time

The course is the core. These features help you build on it — connecting topics, revisiting what you've learned, and staying current.

Course companion

Greg is an AI chat partner loaded with your course context — the chapters, the sources, the specific claims. Ask follow-up questions and Greg answers from your research, not from general knowledge. When Greg cites something, you can click through to the original source and its credibility score.

Knowledge graph

As you research more topics, Gregator maps how they relate — prerequisites, contradictions, shared sources. Visual and list views.

Horizon newsletter

Weekly recommendations based on your research history. Topics adjacent to what you've studied that you might not have found on your own.

Intelligent scoring — here's what you see when you open a source

Lipid Nanoparticle Delivery Systems for mRNA Therapeutics · Peer-reviewed
Source type Peer-reviewed journal article — Nature Reviews Drug Discovery, 2021
Evidence Multi-cohort clinical data. 95% confidence intervals reported. 3 independent trials cited.
Signals
Quantitative claims backed · Multiple independent citations · Single-disease focus
Flags None detected. The scoring pipeline found no promotional language and no conflicts of interest.

Every source in your course has a panel like this. Source type (peer-reviewed, institutional, journalism, community), evidence quality, positive and negative signals, and any flags the scoring pipeline found — marketing language, missing citations, or single-source claims.

Multi-source aggregation

Academic databases, web search, and news are queried in parallel for each chapter. Results are deduplicated and ranked before synthesis begins.

Greg knows what you've
been reading

Greg is loaded with your course context — the chapters, sources, and claims from your research. When you ask a question, Greg answers from that material and cites the specific sources. You can click any citation to see the full source assessment.

Scoped to your research — Greg references the sources from your course, not generic training data. If a source is weak, Greg says so.
Devil's advocate mode — ask Greg to push back on the conclusions in your course and surface counterarguments from the sources.
Greg Ch. 2 — Lipid Nanoparticles
The chapter says endosomal escape is well-established. How confident should I be in that?
Fairly confident for the core mechanism. Three independent cryo-EM studies (Hou 2021, Eygeris 2022, Yanez Arteta 2019) show consistent results for ionizable lipid protonation triggering membrane disruption.

The uncertainty is in efficiency — only 2–10% of LNPs successfully escape the endosome. The rest are degraded in lysosomes. This is an active area of research.
Nature Rev. Drug Discovery — peer-reviewed
Ask about this chapter...
"I used to spend hours opening tabs, evaluating sources, and trying to piece together what the consensus actually was. Gregator does that in minutes and shows me exactly which claims are well-supported and which aren't. It changed how I prepare for deep-dive presentations."
Maria Chen — Senior analyst, biotech research. Early access user since January 2026.

Learning in the age of AI is broken

AI makes it easy to get answers. It doesn't make it easy to understand. Gregator is built to close that gap.

For your understanding,
not your attention

No engagement loops, no algorithmic feeds, no time-on-site metrics. Gregator succeeds when you learn something and leave.

Synthesis, not just summaries

Gregator builds interpretations grounded in sources — and when evidence conflicts or a source has weak backing, it tells you. You see where confidence is high and where it isn't.

Every claim is checkable

Click any citation to see the source, its credibility tier, and what the scoring found. Nothing is hidden behind a "trust me."

Things you'd want to know first

Can it hallucinate? How do I know the claims are real?

Gregator uses AI to synthesize sources, so the usual caveats apply — it can misinterpret or oversimplify. The difference is that every claim has a visible citation you can click to read the original source and see how it was scored. If something looks off, you can check immediately. That's the point.

How fresh are the sources?

Sources are gathered live when you submit a topic. The pipeline queries current web and academic databases. Courses are not pre-built — they are generated on demand. You can also re-run research on any topic to get updated sources.

What happens when sources disagree?

Gregator flags it. You'll see "Mixed evidence" insight cards in the chapter text explaining what the disagreement is, which sources support each side, and what the evidence quality looks like for each. It doesn't paper over conflicts.

How does source scoring actually work?

Each source is categorized by type (peer-reviewed, institutional, journalism, community), then scored across several signals: whether claims are quantitatively backed, whether multiple independent sources corroborate key findings, and whether promotional or marketing language is present. The full breakdown is visible on every source.

How long does a course take to generate?

Typically 2–4 minutes depending on topic complexity and how many sources need to be analyzed. You can watch the pipeline run in real time. Each stage has a progress indicator, and chapters appear as they are completed.

What's the difference between standard and advanced Greg?

Standard Greg uses a capable but smaller language model — good for follow-up questions and basic discussion. Advanced Greg (Pro and above) uses a larger model that gives more nuanced answers, handles longer conversations, and is better at synthesizing information across multiple chapters.

Start free. Upgrade when you need more depth.

All plans include the full research pipeline with source scoring. Higher tiers give you longer courses, better AI models, and tools for ongoing learning.

Free
Try it out
$0 / month
  • 1 topic per month
  • Up to 10 chapters per course
  • Source scoring on all plans
  • Greg chat (standard model)
Get started
Unlimited
No limits on learning
$29 / month
  • Everything in Pro
  • Unlimited topics
  • Faster research pipeline
  • Data export
Start 7-day trial

Pick a topic you've been meaning to learn

Your first 5 courses are free. No credit card needed.

Start your first course