Group: ipernity API development


doc.search limit


Roberto Ballerini - traveling
18 Oct 2008 - 4 comments - 1 088 visits

I don't understand this 3000 items limit.
If the goal is to limit the quantity of calls, a 'per hour' call limit as on Flickr seems more effective to me.
As it is implemented now, I can always play with the search parameters and circumvent it: the result is that the code is harder to write, but the server load decreases only slightly...
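To make the workaround concrete, here is a minimal sketch of that parameter-slicing, assuming a JSON endpoint and hypothetical posted_min/posted_max date parameters (the real ipernity parameter names may differ):

```python
import requests  # generic HTTP client; any API kit works the same way

API_URL = "http://api.ipernity.com/api/doc.search/json"  # assumed endpoint shape
CAP = 3000       # the result window being discussed
PER_PAGE = 100   # the page-size ceiling

def fetch_range(api_key, start_ts, end_ts, step=86400):
    """Slice a big search into date windows so each window stays under
    the 3000-item cap; shrink 'step' if one window still overflows.
    posted_min/posted_max are hypothetical parameter names."""
    docs = []
    t = start_ts
    while t < end_ts:
        page = 1
        while True:
            resp = requests.get(API_URL, params={
                "api_key": api_key,
                "posted_min": t,           # hypothetical
                "posted_max": t + step,    # hypothetical
                "per_page": PER_PAGE,
                "page": page,
            }).json()
            batch = resp["docs"]["doc"]    # assumed response layout
            docs.extend(batch)
            if len(batch) < PER_PAGE or page * PER_PAGE >= CAP:
                break
            page += 1
        t += step
    return docs
```

The extra outer loop over date windows is exactly the added complexity: the same items get fetched, just with more code and more requests.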
Comments
Dirk
The problem with a limit: how do you test your application? Of course, good programmers build a caching system, but until that point you need to make a lot of calls just to get data to “play” with while coding your application.
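A minimal sketch of that dev-time cache idea, using a flat-file store and a hypothetical do_request callable standing in for the real API call:

```python
import hashlib
import json
import os

CACHE_DIR = "api_cache"

def cached_call(method, params, do_request):
    """Dev-time cache: replay identical calls from disk instead of
    hitting the server again while you iterate on your code.
    'do_request(method, params)' stands in for the real API call."""
    os.makedirs(CACHE_DIR, exist_ok=True)
    key = hashlib.sha1(
        json.dumps([method, sorted(params.items())]).encode()
    ).hexdigest()
    path = os.path.join(CACHE_DIR, key + ".json")
    if os.path.exists(path):
        with open(path) as f:
            return json.load(f)
    result = do_request(method, params)
    with open(path, "w") as f:
        json.dump(result, f)
    return result
```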

15 years ago.
Roberto Ballerini -… has replied to Dirk
Caching can certainly solve some problems. The PHP API kit I used to play with the Flickr API had caching integrated already, and you needed a MySQL db to use it.
But when you intrinsically have to deal with large quantities of data, caching doesn't help, and the 100-items-per-page limit forces you to make more roundtrips to obtain the same amount of data. Analysing a single day's worth of shots, I already saw the data changing while I was looping... Perhaps the solution is caching/proxying on the server side instead of on the client side of the XML-RPC interaction.
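The arithmetic behind those roundtrips, as a quick sketch (the 5 000-shots-per-day figure is just a month of uploads divided by thirty):

```python
import math

def roundtrips(total_items, per_page=100):
    """Sequential calls needed to page through a result set."""
    return math.ceil(total_items / per_page)

# ~5 000 shots in a day means 50 sequential calls, during which
# the underlying result set can change under you.
print(roundtrips(5000))  # -> 50
```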

15 years ago.
Christophe Ruelle club
I think there is a misunderstanding about the 3000-items limit. We limit doc.search response pages to the first 3000 elements. For example: if a search returns 200 000 results and you use per_page=10, you will be able to browse 300 pages at most.
The API call limits are much higher (100K requests/day or so).
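In other words, the reachable window shrinks with the page size; a quick check, assuming the 100-items-per-page maximum mentioned earlier in the thread:

```python
def max_pages(per_page):
    """Pages reachable under the 3000-element response window."""
    return 3000 // per_page

print(max_pages(10))   # -> 300 (Christophe's example)
print(max_pages(100))  # -> 30 pages even at the maximum page size
```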
15 years ago.
Roberto Ballerini -… has replied to Christophe Ruelle club
There isn't a misunderstanding, Christophe. I understood it well, and I'd like to know the reason for it, if it isn't to reduce server load.
Some examples:
- a month's worth of public uploads: 150 000 shots
- Ojisanjake's stream: 12 000+ shots
- the bigger groups: 10 000+ shots
These are three situations where we'd have to add an extra outer loop, varying some parameter, to walk through all the items.
If you want a bot to help admins manage group pools, if you want to sum the total visits on Jake's stream, if you want to find the most viewed shots of September... these are all situations where the 3000-items limit adds complexity to the code.
So if the reason for the limit is to reduce server load, well, I can understand it, but I don't think it's effective, since I can code around it: it would be better to introduce a calls-per-hour limit (sketched below) to force us to streamline our code (though perhaps this testing phase isn't the right moment for it...).
You have to find a balance between server costs and quality of service for developers: I think the quantity of Flickr mashups available is one of the biggest reasons for their success --> give developers the possibility to grow your success ;-)
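For what it's worth, here is a minimal client-side sketch of such a sliding-window limit (the 3600-calls figure is arbitrary; a real server-side limit would pick its own number):

```python
import time

class HourlyThrottle:
    """Sliding-window version of a calls-per-hour limit, enforced
    client-side: call wait() before each API request."""

    def __init__(self, limit=3600):
        self.limit = limit
        self.calls = []  # timestamps of calls in the last hour

    def wait(self):
        now = time.time()
        self.calls = [t for t in self.calls if now - t < 3600]
        if len(self.calls) >= self.limit:
            # Sleep until the oldest call ages out of the window.
            time.sleep(self.calls[0] + 3600 - now)
            self.calls.pop(0)
        self.calls.append(time.time())
```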
15 years ago.
