I wonder what’s actually going on; I doubt it’s about “scraping” and “manipulation”
This is second-hand, so take it with a grain of salt, but I’ve seen mention of a bug that sometimes causes the same GraphQL query to be executed in an infinite loop (presumably the requests are async, so the browser wouldn’t lock up and the user wouldn’t even notice).
So they may essentially be getting DDoSed by their own users due to a bug on their end.
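To make that concrete, here’s a purely hypothetical sketch (TypeScript, endpoint and query names made up, obviously not their actual code) of how an async request loop with no backoff and no exit condition can quietly hammer the same GraphQL endpoint forever:

    // Hypothetical sketch only: a fetch loop with no backoff and no exit condition.
    // Every iteration re-sends the same GraphQL query; because it's all async,
    // the page stays responsive and the user never notices anything is wrong.

    const TIMELINE_QUERY = `query HomeTimeline { timeline { id text } }`; // made-up query

    async function pollTimeline(): Promise<void> {
      while (true) {
        try {
          const res = await fetch("/graphql", {
            method: "POST",
            headers: { "Content-Type": "application/json" },
            body: JSON.stringify({ query: TIMELINE_QUERY }),
          });
          if (!res.ok) {
            // Bug: a 429 or 5xx lands here and we immediately loop again,
            // so every throttled client turns into a tiny DDoS bot.
            continue;
          }
          await res.json(); // the real app would render the result here
        } catch {
          // Network error? Retry instantly too. No delay, no cap, anywhere.
        }
      }
    }

    pollTimeline();

Put a rate limiter in front of a client like that and every rejected request just triggers another one, which is exactly the “DDoSed by their own users” failure mode.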
Edit: better info: https://sfba.social/@sysop408/110639435788921057
Ha, that’s hilarious. Absolutely not a surprise, though
I bet it’s just more short-sighted penny-pinching; he’s been skipping out on bills left and right.
Or it could be. It’s no coincidence that scraping went way up when he started charging for the API.
Everyone with a brain knows that data will be retrieved somehow; the question is: do you want a lower-cost API option, or do you want them to scrape the whole webpage?
I suspect some of the backend is starting to fail, so the servers can’t keep up with the demand.
For my money, I’d bet the issue stems from abandoning Google Cloud hosting, either out of arrogance or from being unable to afford it.
Oh yeah I completely forgot about that particular idiocy, Elmo gets up to so much stupid shit that it’s hard to keep track.
But I’d also be willing to bet money on this being somehow at least partially tied to ditching GC, likely due to not being able to pay (at least, that’s what’s implied by them refusing to pay the bill). I guess Elmo thought “how hard can running some servers be? I’m a rokit skientist” and decided to just skip paying the bill as a power move instead of trying to make a deal with Google, and now the remaining developers, ops people etc. – those poor bastards – are paying the price.
That’s my bet too. They weren’t hosting the site itself on GCP, but they were using it for trust and safety services, and I bet one of those services was anti-scraping protection, with things like IP blocking and CAPTCHAs, which would explain why scraping suddenly became a problem for them the day their contract ended. It can’t be a coincidence.
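Just to illustrate what I mean by that (completely made up, I have no idea what their actual trust and safety stack looked like): even a dumb per-IP sliding-window counter at the edge is enough to turn heavy scraping into a CAPTCHA wall, and losing something like that overnight would look exactly like a sudden scraping explosion.

    // Speculative sketch of per-IP rate limiting as an anti-scraping measure.
    // Nothing here reflects their actual infrastructure; thresholds are arbitrary.

    type Verdict = "allow" | "challenge" | "block";

    const WINDOW_MS = 60_000;        // 1-minute sliding window (arbitrary)
    const CHALLENGE_THRESHOLD = 300; // requests/min before serving a CAPTCHA (arbitrary)
    const BLOCK_THRESHOLD = 1_000;   // requests/min before dropping traffic (arbitrary)

    const hitsByIp = new Map<string, number[]>();

    function checkRequest(ip: string, now: number = Date.now()): Verdict {
      // Keep only timestamps still inside the window, then record this hit.
      const hits = (hitsByIp.get(ip) ?? []).filter((t) => now - t < WINDOW_MS);
      hits.push(now);
      hitsByIp.set(ip, hits);

      if (hits.length > BLOCK_THRESHOLD) return "block";         // drop at the edge
      if (hits.length > CHALLENGE_THRESHOLD) return "challenge"; // show a CAPTCHA
      return "allow";
    }

    // e.g. checkRequest("203.0.113.7") keeps returning "allow" until that IP gets noisy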