@sarvpriy_arya, we do have proxy rotation built-in. It's definitely not a silver bullet, but it works well for the majority of use cases. I'm open to new ideas or suggestions to help with your particular use case if you have any!
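For context, built-in proxy rotation in a scraper typically amounts to cycling each request through a pool of proxy endpoints and retrying through the next proxy on failure. A minimal sketch, assuming a hypothetical static proxy list (a real service would source and score proxies dynamically):

```python
import itertools
import requests

# Hypothetical proxy pool; a real rotation service manages this dynamically.
PROXIES = [
    "http://proxy1.example.com:8080",
    "http://proxy2.example.com:8080",
    "http://proxy3.example.com:8080",
]
_pool = itertools.cycle(PROXIES)

def fetch_with_rotation(url: str, retries: int = 3) -> requests.Response:
    """Try the request through successive proxies until one succeeds."""
    last_error = None
    for _ in range(retries):
        proxy = next(_pool)
        try:
            return requests.get(
                url, proxies={"http": proxy, "https": proxy}, timeout=10
            )
        except requests.RequestException as err:
            last_error = err  # rotate to the next proxy and retry
    raise last_error
```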
Very cool idea! I wonder if you run model inference on every request, or just once to cache the content selector?
@vzotov Thanks! The model is used on every request, which makes it a bit slower (the average LLM response time is around 1-2 seconds), but it helps handle a lot of edge cases related to layout changes, incorrect selectors, etc. Letting users choose between selector caching and running the LLM on every request could actually be a nice additional feature.
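That hybrid approach could look roughly like the sketch below: try a previously cached selector first, and only fall back to LLM inference (caching its answer) when the selector is missing or no longer matches. The `llm_infer` and `query` callables and the cache shape are hypothetical stand-ins, not the product's actual API:

```python
from typing import Callable, Optional

# Hypothetical in-memory cache keyed by (domain, field); a real system would
# persist this and invalidate entries once extraction starts failing.
_selector_cache: dict[tuple[str, str], str] = {}

def extract(
    domain: str,
    field: str,
    html: str,
    llm_infer: Callable[[str, str], str],       # html, field -> CSS selector
    query: Callable[[str, str], Optional[str]], # html, selector -> value
) -> Optional[str]:
    """Use a cached selector when it still matches; otherwise ask the LLM."""
    selector = _selector_cache.get((domain, field))
    if selector:
        value = query(html, selector)
        if value is not None:
            return value  # fast path: skips the ~1-2 s LLM call
    # Slow path: the layout changed or we've never seen this page,
    # so derive a fresh selector with the LLM and cache it.
    selector = llm_infer(html, field)
    _selector_cache[(domain, field)] = selector
    return query(html, selector)
```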