
Best practices for managing rate limits when tracking near real-time leaderboard changes across cycling and running segments

  • April 6, 2026
  • 2 replies
  • 29 views


Hey all, looking for some guidance from developers who have solved this problem or from the Strava team directly.

I'm building a segment discovery app (Segment Sniper) that helps athletes find segments with beatable leaderboards for both cycling and running. One of the features I'm working on is alerting users when their KOM, QOM, or CR position is under threat or has been taken.

The challenge is getting leaderboard data frequently enough to detect changes without blowing through the rate limits (200 requests per 15 minutes, 2,000 per day).
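To make the trade-off concrete, here's a back-of-envelope budget calculator I've been using (the reserve fraction and the idea of splitting the budget are my own assumptions, not anything from Strava's docs):

```python
# Rough capacity planning under the standard limits mentioned above
# (200 requests / 15 min, 2,000 / day). The daily cap is usually the
# binding constraint for background polling.

DAILY_LIMIT = 2000

def max_segments(refresh_hours: float, reserve_fraction: float = 0.5) -> int:
    """How many segments fit in the daily budget if each is polled every
    `refresh_hours` hours, keeping `reserve_fraction` of the budget in
    reserve for webhooks, discovery calls, and retries."""
    polls_per_segment_per_day = 24 / refresh_hours
    polling_budget = DAILY_LIMIT * (1 - reserve_fraction)
    return int(polling_budget // polls_per_segment_per_day)

print(max_segments(6))   # 6-hour TTL, half the budget reserved -> 250 segments
print(max_segments(1))   # hourly polling shrinks capacity to 41 segments
```

That's what pushes me toward webhooks plus selective polling rather than polling everything on a short TTL.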

Here's what I'm currently thinking:

1. Using webhooks (POST /push_subscriptions) to detect when any of our users uploads a new activity. When a webhook fires, I check which segments overlap with that activity and refresh only those specific leaderboards. This avoids polling entirely for activity detection.

2. For leaderboard monitoring on segments where our users hold positions, I'm batching GET /segments/{id}/leaderboard calls during off-peak hours and caching results in Supabase with a 6-hour TTL. But six hours feels too slow for a "your crown is under attack" alert to be useful.

3. For segment discovery (GET /segments/explore), I'm caching results for 24 hours per geographic tile since new segments don't appear that frequently.
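For point 1, the core of what I have so far looks roughly like this (the event shape follows Strava's webhook docs; `fetch_activity_segment_ids` and `refresh_leaderboard` are hypothetical stand-ins for my own API client and cache layer):

```python
# Turn a webhook event into a targeted refresh of only the monitored
# segments the new activity actually touched, instead of polling everything.

from typing import Callable, Iterable

def handle_webhook_event(
    event: dict,
    monitored_segments: set[int],
    fetch_activity_segment_ids: Callable[[int], Iterable[int]],
    refresh_leaderboard: Callable[[int], None],
) -> list[int]:
    """Refresh monitored segments overlapped by a newly created activity."""
    # Only newly created activities matter; ignore updates, deletes,
    # and athlete-level events.
    if event.get("object_type") != "activity" or event.get("aspect_type") != "create":
        return []

    activity_id = event["object_id"]
    touched = set(fetch_activity_segment_ids(activity_id))
    to_refresh = sorted(touched & monitored_segments)
    for segment_id in to_refresh:
        refresh_leaderboard(segment_id)
    return to_refresh
```

The intersection with `monitored_segments` is what keeps the request count proportional to actual competition rather than total upload volume.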

My questions for the community:

How are other developers handling the balance between fresh leaderboard data and rate limits? Is there a recommended cache TTL for leaderboard data that Strava considers acceptable under the API Agreement's 7 day cache limit?

Has anyone implemented a priority queue approach where segments with active competition (recent new efforts) get polled more frequently while dormant segments get checked less often? Curious how you're weighting that.

For those building multi-sport apps, are you seeing meaningful differences in leaderboard activity patterns between cycling segments (KOM/QOM) and running segments (CR)? I'm wondering if running segments can be polled less frequently since they tend to have lower attempt volumes.

Would love to hear how others have approached this. Also happy to share what I've learned so far about segment data patterns if it's useful to the community.

Cheers
Brooks

2 replies

Jan_Mantau
Superuser
  • April 6, 2026

If you use /segments/{id}/leaderboard, that means you're scraping the website and not the API, so the rate limits aren't a concern. If you mean /api/v3/segments/{id}/leaderboard, then the question is how you got permission to use that endpoint. I'd guess you won't find many other developers here who have the access rights for it, and those who do usually run high-level apps with much higher limits. But I could be wrong, of course, and others may have your combination of the basic limits plus segment leaderboard access.


Brooks

Thanks Jan, really appreciate the insight. You were spot on.

I've just run a full endpoint audit and confirmed that /api/v3/segments/{id}/leaderboard returns 403 on my application with standard API access. So that answers that.

What I do have access to is /api/v3/segments/{id} which returns the xoms data (KOM/QOM/CR times), athlete_count, effort_count, star_count, and the authenticated user's own athlete_segment_stats including their PR. I also have /api/v3/segment_efforts which gives me the user's full effort history with elapsed times and ranks.
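To show concretely what I'm extracting from that response, here's a sketch. The field names (xoms, athlete_count, effort_count, star_count, athlete_segment_stats) match what I'm seeing in my responses; the xoms key selection and the time-string parsing are my assumptions, so check which keys your own responses contain:

```python
# Pull the usable fields out of a GET /api/v3/segments/{id} response.
# xoms times come back as formatted strings ("4:27"), while the user's
# PR in athlete_segment_stats is plain seconds.

def parse_hms(value: str) -> int:
    """Convert 'h:mm:ss' or 'm:ss' to seconds."""
    seconds = 0
    for part in value.split(":"):
        seconds = seconds * 60 + int(part)
    return seconds

def extract_crown_gap(segment: dict, sport: str = "ride") -> dict:
    """Gap between the user's PR and the relevant crown time, plus the
    popularity counters used for scoring."""
    xoms = segment.get("xoms", {}) or {}
    crown_key = "kom" if sport == "ride" else "cr"  # assumption: adjust per sport/keys
    crown = parse_hms(xoms[crown_key]) if crown_key in xoms else None
    stats = segment.get("athlete_segment_stats", {}) or {}
    pr = stats.get("pr_elapsed_time")
    return {
        "crown_s": crown,
        "pr_s": pr,
        "gap_s": (pr - crown) if (pr is not None and crown is not None) else None,
        "athlete_count": segment.get("athlete_count"),
        "effort_count": segment.get("effort_count"),
        "star_count": segment.get("star_count"),
    }
```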

So the KOM time and the user's own PR come through fine. What I'm missing is the granular top 10 times that would let me show things like "you need 12 seconds to move from #7 to #4."

I've reworked the scoring algorithm to run without the leaderboard data. It now weights five factors using what's available: gap between the user's PR and the KOM/QOM/CR time (40%), athlete count as a proxy for competition density (20%), effort-to-athlete ratio as a measure of how actively contested the segment is (15%), star count for popularity (10%), and the user's own improvement trajectory on similar segments (15%).

It's less precise than having the full top 10 breakdown but it still surfaces genuinely beatable segments effectively. Low athlete count segments with stale KOM times and a user whose PR is within 10% score high, which is correct behaviour.
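Here's the scoring function in sketch form. The weights are the ones above; the normalisation constants (the 1,000-athlete, 10-efforts-per-athlete, and 500-star scales) are placeholder values I'm still tuning:

```python
# Five-factor beatability score in [0, 1]; higher = more beatable.
# Weights: crown gap 40%, athlete sparsity 20%, contest level 15%,
# obscurity 10%, user's improvement trajectory 15%.

def clamp01(x: float) -> float:
    return max(0.0, min(1.0, x))

def beatability_score(pr_s: float, crown_s: float, athlete_count: int,
                      effort_count: int, star_count: int,
                      improvement_trend: float) -> float:
    """improvement_trend: 0..1, the user's trajectory on similar segments."""
    # 40%: closeness of PR to the crown (1.0 when equal, 0.0 when 2x slower)
    gap = clamp01(1 - (pr_s - crown_s) / crown_s)
    # 20%: fewer athletes => less competition (placeholder 1,000-rider scale)
    sparsity = clamp01(1 - athlete_count / 1000)
    # 15%: fewer efforts per athlete => less actively contested
    contest = clamp01(1 - (effort_count / max(athlete_count, 1)) / 10)
    # 10%: fewer stars => less visibility (placeholder 500-star scale)
    obscurity = clamp01(1 - star_count / 500)
    return (0.40 * gap + 0.20 * sparsity + 0.15 * contest
            + 0.10 * obscurity + 0.15 * clamp01(improvement_trend))
```

With these normalisers a segment with a close PR and low athlete count outranks the same segment when it's crowded, which matches the behaviour I described above.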

Next step is requesting elevated access to the leaderboard endpoint. If anyone here has gone through that process and has tips on what Strava looks for in those requests, I'd appreciate the guidance.

Thanks again for steering me in the right direction.

Cheers
Brooks