One thing I kept running into while building CITAQ: most teams working on AI shopping agents are either scraping retailer pages, trusting merchant-provided feeds, or pulling from aggregators like Google Shopping.
All three have the same problem: the data is unverified at the claim level. An agent retrieving "waterproof to 50m" has no way to know whether that's a tested specification or a line someone typed into a product description.
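To make the gap concrete, here's a minimal sketch (field names and parsing logic are illustrative, not any real schema): whether the claim comes from a structured feed or scraped copy, the extracted claim looks the same, and nothing in it carries evidence of testing.

```python
import re

# Hypothetical examples of the same claim arriving two ways.
structured_feed_item = {
    "sku": "WATCH-123",
    "specs": {"water_resistance_m": 50},  # merchant-provided: maybe tested, maybe not
}

scraped_page_item = {
    "sku": "WATCH-123",
    "description": "Rugged design, waterproof to 50m, perfect for diving.",
}

def extract_claims(item):
    """Naively surface waterproof-rating claims from either source."""
    claims = []
    specs = item.get("specs", {})
    if "water_resistance_m" in specs:
        claims.append(("waterproof_m", specs["water_resistance_m"], "structured"))
    match = re.search(r"waterproof to (\d+)\s*m", item.get("description", "").lower())
    if match:
        claims.append(("waterproof_m", int(match.group(1)), "free_text"))
    return claims

# Both yield the same claim tuple; neither says whether 50m was ever tested.
print(extract_claims(structured_feed_item))  # [('waterproof_m', 50, 'structured')]
print(extract_claims(scraped_page_item))     # [('waterproof_m', 50, 'free_text')]
```

The "source" tag only records where the text came from, not whether anyone verified it, which is exactly the distinction an agent would need.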
Curious how others in the space are thinking about this. Are you ignoring it for now and shipping anyway? Building internal verification logic? Relying on return rates as a signal of bad data?
CITAQ is our answer to this, but I'm genuinely interested in how the problem looks from where you're building.