Amass exposes BioMedCore and TrialCore as API endpoints. It returns 20 ranked papers per query, every time. Benchmarked at F1 38.4% vs. Claude at 11.0%. Purpose-built for research agents, clinical pipelines, and any application that needs reliable biomedical evidence coverage.
Amass API
Biomedical retrieval for AI agents: 3.5x better than Claude
Emil Cronvall left a comment
We built Amass to solve a specific problem: general-purpose AI tools miss most of the relevant biomedical literature when you ask them a research question. We tested this. Amass returned 20 papers per question on every run. Claude averaged 7.6. ChatGPT averaged 3.6. F1 measures both coverage and precision — it penalizes tools that miss relevant papers as much as those that return irrelevant...
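To make the metric concrete, here is a minimal sketch of how set-level F1 can be computed over retrieved and relevant paper IDs. The function name and inputs are illustrative, not the benchmark's actual code; the real evaluation may use graded relevance or per-query averaging.

```python
def f1_score(retrieved: set, relevant: set) -> float:
    """F1 over paper IDs: harmonic mean of precision and recall.

    Penalizes missing relevant papers (low recall) exactly as much
    as returning irrelevant ones (low precision).
    """
    if not retrieved or not relevant:
        return 0.0
    hits = len(retrieved & relevant)
    if hits == 0:
        return 0.0
    precision = hits / len(retrieved)
    recall = hits / len(relevant)
    return 2 * precision * recall / (precision + recall)
```

For example, a tool that returns 3 papers, all relevant, against a pool of 20 relevant papers scores precision 1.0 but recall 0.15, for an F1 of only about 0.26 despite returning zero noise.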
