Does anyone have a good article/explanation on how rank trackers work? I’ve heard recently that Google doesn’t like these companies running millions of queries/scraping that data because it slows down G’s systems.
What would Google do to block such kinds of activities?
Yes, Google is pretty strict about blocking when you crawl too aggressively. Their primary methods are serving a CAPTCHA or an "unusual traffic" bot-detection page. To play nicely with Google, you need a decent rest time between queries from the same IP (we rest around 4 to 6 minutes) and progressively longer rest times for IPs that have been blocked. Also, don't mess up the URLs: unnatural URLs (query strings a normal browser wouldn't produce) are more susceptible to getting blocked.
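The pacing policy above can be sketched roughly like this. This is a hypothetical illustration, not anyone's actual implementation: the 4-6 minute base rest comes from the post, while the doubling-per-block backoff and the `IpPacer` class are my own assumptions about what "progressively increasing rest times" might look like.

```python
import random

# Base rest window between queries on the same IP: the 4-6 minutes
# mentioned above, expressed in seconds.
BASE_REST_RANGE = (240, 360)

class IpPacer:
    """Tracks consecutive blocks per IP and computes how long to rest."""

    def __init__(self):
        self.block_counts = {}  # ip -> consecutive blocks seen

    def next_rest(self, ip: str) -> float:
        """Seconds to rest before the next query from this IP."""
        base = random.uniform(*BASE_REST_RANGE)
        # Double the rest for every consecutive block on this IP
        # (assumed backoff shape; the post only says "progressively increasing").
        return base * (2 ** self.block_counts.get(ip, 0))

    def record_result(self, ip: str, blocked: bool) -> None:
        """Call after each query: bump the counter on a CAPTCHA/block page,
        reset it once the IP serves normal results again."""
        if blocked:
            self.block_counts[ip] = self.block_counts.get(ip, 0) + 1
        else:
            self.block_counts[ip] = 0
```

In practice you'd call `next_rest(ip)`, sleep that long, run the query, then feed the outcome back via `record_result(ip, blocked=...)`. The jitter from `random.uniform` also helps avoid a perfectly regular query cadence, which is itself a bot signal.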