
I noticed this in your latest announcement about updates to the TOS/privacy policy:

  • AI features. Like so many of you, we’re excited about AI, and we’re identifying ways it can enhance your product experience. We explain how we use data in our AI features.

(Sidenote: this is just terrible messaging, not everyone is “excited about AI,” and it’s basically a triggering word for a lot of people, who will write off your entire product because they have come to associate “AI” with “en**bleep**tification.”)

The only explanation of using data in AI features I could find was in the privacy policy (identical language appears in Legal Bases):

  • Provide AI Features. For example, we use machine learning or artificial intelligence, including large language models, to detect anomalies on leaderboards, generate route recommendations, or provide personalized training guidance.

This is…not a lot of explanation. At a minimum I’d like to know:

  • Which of my data contribute to which features. For example, do my route data contribute to route recommendations?
  • Which features I see may be generated artificially. For example, which routes are created artificially and which are created by humans?

More specifically, I’d like to be able to opt in or out of AI-assisted features (i.e., prohibit the use of my data in training a model). For example, I might want to forgo appearing on leaderboards if it means my data aren’t used to train anomaly detection.

I mostly agree with @axoplasm. I am not sure why such odd phrasing is necessary ("Like so many of you, we’re excited about AI" - are we? are we really? Also, see.. I am triggered), but I can phrase my position on that topic very bluntly: it's either opt-in or walk-out.

However, I would not want a lesser experience than what I have today without AI features. Neither a forced/default/automatic opt-in nor an "opt-in or else" attitude would come across as a good-faith decision space.

Oh, and just to be clear, by opt-in I mean exactly that: off by default. Just like LinkedIn's "mishap" this week showed, companies that add and enable AI data siphoning and use by default are not generally welcomed with joy and excitement.

If your PMs lean toward the opt-out side of things, however, you and "we" both know why that would be the case, and it would just show that maybe.. just maybe.. that "excited about AI" ain't really the case.

Let the excited ones opt in & the rest of us just have a good ride/run/...


I for one am not excited about having my data mined and exploited so I can get goofy advice I don’t want from a network of computers using tremendous amounts of energy. I’m sure that Strava is hoping to cash in on the “AI” craze, but not allowing a method to opt out sucks.

I just started using Strava in the last year and have had a lot of fun connecting with friends in the local cycling community, but as a person who bikes in part for environmental reasons, lighting the rainforest on fire to get advice and suggestions of dubious value makes me want to quit the service.


In the EU and UK, even LinkedIn has found that opting users in to the use of their data for generative AI is not OK: https://www.bbc.co.uk/news/articles/cy89x4y1pmgo
If an opt-out doesn’t appear, then I guess the ICO needs to get involved.


This is very concerning to me and is making me consider whether I need to mass-delete the past 12 years of data from Strava before the 30th. Making it default opt-in for things like this is a terrible take, especially when those AI features/uses are not explained and there is no opt-out. I am now questioning Strava's entire stance on privacy. Sincerely, an ML engineer (which is just to say, I understand the benefits of AI, but you're missing the mark here).


Does anybody know where to report this to the EU commission?


I think this would be through the body in your country. https://digital-strategy.ec.europa.eu/en/policies/dsa-dscs


I believe that training the AI with flagged and unflagged activities to learn when an activity has the wrong activity type or had motor assistance would be highly beneficial for all the users who care about the integrity of leaderboards. If only activities and data streams that are visible to everyone are used (I don't know whether that is the case), then no privacy has been breached. But I recognize that using AI in a psychological sense goes further than the processing of data that already happens, and therefore it would be wise to offer user options for that. My only concern is that an opt-in makes the whole AI thing useless, because we know from the fly-by feature that so many people don't even know there is an opt-in for it. Maybe it could be promoted a bit more than just some checkbox in the settings.
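(For what it's worth, a supervised classifier along those lines is a standard setup. Below is a minimal, purely illustrative sketch; the features, numbers, and model choice are my own assumptions for demonstration, not anything Strava has described.)

```python
# Illustrative sketch: training a classifier on flagged vs. unflagged
# activities to spot likely motorized rides. All features and numbers
# below are synthetic assumptions, not Strava's actual pipeline.
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
n = 1000

# Per-activity features: avg speed (km/h), max speed, climbing speed on
# steep grades, and a rough heart-rate-per-speed effort ratio.
honest = np.column_stack([
    rng.normal(25, 4, n),
    rng.normal(45, 6, n),
    rng.normal(12, 3, n),
    rng.normal(5.5, 1.0, n),
])
motorized = np.column_stack([
    rng.normal(32, 4, n),
    rng.normal(55, 6, n),
    rng.normal(22, 4, n),     # suspiciously fast climbing
    rng.normal(3.0, 1.0, n),  # low effort for the speed
])

X = np.vstack([honest, motorized])
y = np.concatenate([np.zeros(n), np.ones(n)])  # 1 = flagged/motorized

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
clf = GradientBoostingClassifier().fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```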


Opting users into AI raises several privacy and ethical concerns. Here are a few specific examples:

1. Re-identification of Anonymized Data

  • Issue: Even if data is anonymized, advanced AI algorithms can re-identify individuals by cross-referencing data points such as location, times, and activity patterns.
  • Harm: This could reveal a user's home, workplace, or frequent running routes, making them vulnerable to stalking or harassment (see the sketch after this list).

2. Location-Based Profiling

  • Issue: AI could analyze users' movement patterns to build profiles of where they live, work, or exercise.
  • Harm: This information could be misused by third parties, such as advertisers, insurance companies, or even criminals, to exploit or target individuals.

3. Predictive Behavior Monitoring

  • Issue: AI trained on users' fitness and location data could predict future activities, including their whereabouts at specific times of the day.
  • Harm: This could be used by malicious actors to plan targeted crimes like burglary when users are away from home on routine activities.

4. Discrimination Based on Health or Fitness Data

  • Issue: AI could use Strava data to infer health conditions, fitness levels, or lifestyle habits.
  • Harm: This could lead to discriminatory practices, such as denying health or life insurance, or employers using inferred data in hiring or firing decisions.

5. Uncontrolled Data Sharing with Partners

  • Issue: AI companies that partner with Strava may share or sell user data to other entities.
  • Harm: This can result in users’ data being aggregated across platforms, deepening the risk of loss of privacy and unwanted profiling across services they never opted into.

6. Unintended Psychological Impact

  • Issue: AI could analyze and predict emotional states or psychological patterns based on exercise routines and social interactions on the platform.
  • Harm: This could lead to manipulative targeting in advertising or recommendations, such as promoting unhealthy behavior or exploiting emotional vulnerabilities.

7. Surveillance and Tracking

  • Issue: Strava’s data could be accessed by law enforcement or government agencies to track individuals' movements and activities.
  • Harm: In authoritarian regimes, such data could be used to monitor dissent or movement patterns, leading to unwarranted arrests or surveillance.

8. Misuse by Third-Party AI Developers

  • Issue: Strava’s vague policy leaves room for data to be used by various AI developers for purposes unknown to the user.
  • Harm: Data could be used in unexpected ways, such as training AI for surveillance tech, predictive policing, or advertising manipulation, all without user consent.

9. Exploitation of Competitive or Sensitive Data

  • Issue: Professional athletes or competitors could have their performance data exposed and used by rivals or other organizations without their knowledge.
  • Harm: This could result in unfair competitive advantages, misuse of personal performance strategies, or even sabotage.

10. Lack of Transparency on Future AI Uses

  • Issue: AI capabilities evolve rapidly, and Strava’s policy doesn’t specify what kind of AI development the data will support.
  • Harm: Strava could later use the data for controversial or harmful AI applications, like behavior prediction or real-time surveillance, with no oversight from users.
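(To make point 1 concrete, here is a minimal sketch of the re-identification risk. The coordinates are synthetic and the ~100 m grid is an arbitrary assumption; it illustrates the general technique, not anything Strava does.)

```python
# Sketch of re-identification from "anonymized" data: the most frequent
# activity start cell for a pseudonymous user often reveals their home.
# All coordinates below are synthetic.
from collections import Counter

starts = [
    (52.3701, 4.8952), (52.3702, 4.8950), (52.3700, 4.8951),  # from home
    (52.0907, 5.1214),                                        # one-off ride
    (52.3701, 4.8953), (52.3699, 4.8950),                     # from home
]

# Snap to ~100 m grid cells and take the most common cell.
cells = Counter((round(lat, 3), round(lon, 3)) for lat, lon in starts)
home_cell, count = cells.most_common(1)[0]
print(f"likely home cell: {home_cell} ({count} of {len(starts)} starts)")
```

Cross-referencing that cell with public map or address data is all it takes to turn an "anonymous" activity stream into a person.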

I realize that many of these may seem like a stretch, but given the vagueness of the privacy policy update, I have to assume the worst-case scenario: that Strava will not only use user data to train AI models, but will share or monetize this data in the future. Worse yet would be if they don't develop the AI in-house but use a third-party company, and so have much less control over how that data will be used or shared in the future. And that is a concern because the privacy policy specifically says "Categories of third parties that we may share your information with are listed here," which leads to a ":todo", meaning this is an area they plan to update.

In short, Strava can't pretend to care at all about privacy while auto-opting users into this. I eagerly await a response from Strava.


This will most likely go viral on LinkedIn and other social sites once legal and tech industry practitioners discover it, similar to how other mass data-ingesting platforms have suffered in recent months.

It would be great if Strava got ahead of it and addressed this with a level of granularity and transparency that fosters customer trust. The AI feature release could be pushed back on the roadmap in order to develop the compliance mechanisms needed.


Hello all,


Thank you for your patience as we worked towards replying to this request. We are happy to share more about how you can opt out of the use of your data for AI features.


Athlete Intelligence, a beta feature that provides training analysis and insights, is currently our only generative AI-powered feature. Athlete Intelligence is currently only available to a limited set of private Beta testers, but we hope to open it more widely to athletes with a Strava subscription in the near future. Users can opt out at any time (by clicking the “Leave beta” button within the feedback module). Athlete Intelligence insights and summaries are only visible to the owner of the Strava account and are not shared more widely or used to train public AI models.


Strava also leverages non-LLM AI and machine learning to improve certain backend features, such as the detection of anomalies in leaderboard entries and route recommendations. We do not use large language models for these features. We develop and train models internally and do not use or share the activity data with third parties.
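(For readers wondering what non-LLM machine learning for route recommendations can look like in principle, here is a minimal popularity-based recommender. The data, names, and scoring are invented for illustration and say nothing about Strava's actual system.)

```python
# Illustrative sketch: recommend routes by popularity among other athletes.
# Purely hypothetical data and scoring, not Strava's implementation.
from collections import defaultdict

# (athlete_id, route_id) pairs drawn from aggregated, opted-in activities.
activities = [
    ("a1", "coast_loop"), ("a1", "hill_climb"),
    ("a2", "coast_loop"), ("a2", "river_path"),
    ("a3", "hill_climb"), ("a3", "river_path"),
]

def recommend(athlete_id: str, k: int = 2) -> list[str]:
    """Return the k most popular routes the athlete hasn't ridden yet."""
    ridden = {r for a, r in activities if a == athlete_id}
    counts: dict[str, int] = defaultdict(int)
    for a, r in activities:
        if a != athlete_id and r not in ridden:
            counts[r] += 1
    return sorted(counts, key=counts.get, reverse=True)[:k]

print(recommend("a1"))  # -> ['river_path']
```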


If you do not wish to contribute to route recommendations, you can disable the Aggregated Data Usage privacy control or make your activities visible to “Followers” or “Only You.” At this time, only public activities that use the “Everyone” privacy setting are included in leaderboard anomaly detection training.


We appreciate your feedback and hope this information helps address your concerns.


Sadly, this answer does not match what your terms and conditions are making me agree to. If the terms and conditions matched this answer, I would be happy with that.
 
However, your terms and conditions say you can use my data to:
  • Provide AI Features. For example, we use machine learning or artificial intelligence, including large language models, to detect anomalies on leaderboards, generate route recommendations, or provide personalized training guidance.
 
‘For example’ is simply not good enough. I feel Strava needs to update the terms and conditions again to make it clear that you do not use our data to provide AI features without consent (based on selecting the options you mention).

Regardless of the explanation, anyone who wants to use Strava has to agree to the terms and conditions. Since the terms and conditions do not segregate users based on who opted out of which features, it would be honest and fair to make two sets of T&C: one for those who want to "opt in" to AI features one way or the other, and one for those who do not. At this point, the only way to opt out is not to agree to the T&C, which renders Strava a useless service and app on my phone. So why should anyone who does not want to be thrown into AI h-e-l-l keep using Strava?


I have a problem accepting all the changes before I can access my own data.

How do I download my data so I can then delete the app?

I also need a refund of the rest of my billing term.

Thanks, Richard


My account has the “athlete intelligence” feature. I don’t care for it and have zero interest in the feature. Strava is my one social app that isn’t a hot dumpster fire of content, and now they are throwing this junk into the mix. Keep it simple, Strava.


In the app you can leave the beta when you use the “Say more...” button and then the feedback button.


My 2¢: I know that storing my stuff in the cloud does not help my carbon footprint, but having the app tell me my average speed was average, that I cycled longer than usual, etc., based on AI “analysis” of the ride I just did, right below the mention that I saved x amount of CO2 emissions, feels weird, and it adds nothing to anything imho.

To me the feature is useless and only adds cost, but yeah, some might like to read such a text.


Athlete Intelligence is an absolute farce that nobody asked for.

Strava needs to recognise that jumping on the AI hype train just as it’s derailing is a very bad idea.


Opting users into AI raises several privacy and ethical concerns. Here are a few specific examples:

1. Re-identification of Anonymized Data

  • Issue: Even if data is anonymized, advanced AI algorithms can re-identify individuals by cross-referencing data points such as location, times, and activity patterns.
  • Harm: This could reveal a user's home, workplace, or frequent running routes, making them vulnerable to stalking or harassment.

tldr;

Did you literally use AI to generate this spiel? Hypocritical, a little bit?


If you are not comfortable having new features thrust upon you, perhaps signing up to beta testing is not something you should be doing.


@petrfaitl This thread is not about the AI beta in the Strava app


@Jan_Mantau Considering that Athlete Intelligence is only available in Strava Beta, and so many voices mentioned they don't want to be part of it, my post covers that point.

Lola specifically said "Athlete Intelligence is currently only available to a limited set of private Beta testers".

She also explained that they do not use LLMs for those features, and yet there is so much knee-jerk reaction to it.

Surely, it is easier to suggest that Athlete Intelligence should be promoted, but that you should have the option to disable it upon onboarding of the feature.

Athlete Intelligence is actually not very wide-reaching in what it provides. And as far as I can see, it is looking at your own data, not aggregated data from other users. Frankly, in its current form it's no more than prose for your data points.


As somebody else mentioned, her explanations do not match the contents of the new TOS.


I have a problem accepting all the changes before I can access my own data. How do I download my data so I can then delete the app?

You can use this link to download your data (and later delete it) without accepting the new TOS: https://www.strava.com/athlete/delete_your_account

