Featured

Apifier

Web scraper that works on every website

Discussion
Jan Čurn
Maker
@jancurn · Co-founder, Apifier
@passingnotes Hey David, thanks! We're considering a model similar to GitHub's: all free accounts have to make their crawlers public, and only paid accounts can keep them private. Do you think people would be okay with that?
David Carpe@passingnotes · Thinker & Layabout
@jancurn I think people creating certain crawlers will be fine sharing, but may want anonymity for community sharing
David Carpe@passingnotes · Thinker & Layabout
nice! would love to see a community driven collection of custom crawlers (to simply recycle or emulate)
Kat Manalac@katmanalac · Partner, Y Combinator
Apifier is a web scraper that extracts structured data from any website using a few simple lines of JavaScript. For example, imagine you found a website selling shoes and want a spreadsheet with all the shoe sizes, colors, prices, etc. You could create such a spreadsheet manually using copy and paste, but that would cost you a lot of time and frustration. Or you could set up Apifier to do it for you in a few seconds. Apifier is a startup launching from the YC Fellowship.
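To make the shoe example concrete, here is a minimal sketch of the kind of extraction logic involved. The function name, field names, and mocked data are all assumptions for illustration, not Apifier's actual API; in a real crawler this logic would run against the live page's DOM, so the product "elements" are mocked here to keep the snippet runnable anywhere.

```javascript
// Hypothetical sketch: turn product "elements" into structured spreadsheet rows.
// In a real scraper, productElements would come from the page's DOM
// (e.g. something like document.querySelectorAll('.product')); here we mock them.
function extractShoes(productElements) {
  return productElements.map(function (el) {
    return {
      name: el.name,
      size: el.size,
      color: el.color,
      // Normalize the displayed price string into a number.
      price: parseFloat(el.price.replace('$', '')),
    };
  });
}

// Mocked elements standing in for scraped DOM nodes:
var mockedElements = [
  { name: 'Runner X', size: '42', color: 'black', price: '$79.99' },
  { name: 'Trail Pro', size: '40', color: 'red', price: '$99.00' },
];

var rows = extractShoes(mockedElements);
console.log(rows); // → array of { name, size, color, price } objects
```

Each row in the result maps directly to one line of the spreadsheet described above.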
Jan Čurn
Maker
@jancurn · Co-founder, Apifier
Hi Hunters, we’re Jan and Jakub, the makers of Apifier. About a year ago we were looking for a web scraper for one of our consulting projects, but none of the existing ones actually worked on the websites we needed. So we decided to build a new one. Unlike point-and-click web scrapers, Apifier doesn't run into trouble with complicated or dynamic websites, and you don't need to learn a new user interface to start using it: you define your scraper with the same JavaScript code you already use for front-end web development. We’re looking forward to hearing what you think! We’ll be around to answer any questions.
Greg Gerber@gerbz · Making my dent
@jancurn hey guys! I love scrapers =} but the bigger issues with scraping "the websites we need" are around IPs/proxies. What are you doing on that front?
Jan Čurn
Maker
@jancurn · Co-founder, Apifier
@gerbz, yeah absolutely, we're actually rotating a number of proxies. We can also arrange for people to use their own list of proxies if they want.
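The rotation described above can be sketched as a simple round-robin over a proxy list. This is a minimal illustration of the general technique, not Apifier's actual implementation, and the proxy URLs are made-up placeholders.

```javascript
// Minimal sketch of round-robin proxy rotation (an assumption about how
// rotation might work in general; not Apifier's actual implementation).
function makeProxyRotator(proxies) {
  var i = 0;
  // Each call returns the next proxy in the list, wrapping around at the end.
  return function nextProxy() {
    var proxy = proxies[i % proxies.length];
    i += 1;
    return proxy;
  };
}

// A user-supplied proxy list (placeholder URLs):
var next = makeProxyRotator([
  'http://proxy1.example.com:8000',
  'http://proxy2.example.com:8000',
  'http://proxy3.example.com:8000',
]);

// Each outgoing request would be routed through the next proxy in the cycle:
console.log(next()); // → http://proxy1.example.com:8000
console.log(next()); // → http://proxy2.example.com:8000
```

Supporting a user's own proxy list then just means passing that list into the rotator instead of the service's default pool.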