A little while ago, I wrote a post trying to figure out how to improve a web crawler script I had written; it was one of those posts I never published.
Anyway, for some reason I wrote it using Stackless Python, but I was pretty new to it and didn't make it as efficient as I could have. This was version 1: it went to each page, scraped all the valid links (i.e., those on the same server that weren't mailto: links or the like), and then worked through them recursively.
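I never published the version 1 code, but the idea fits in a few lines. Here's a minimal sketch in modern Python of the same approach; the names and the regex-based link extraction are mine, not what the original used:

```python
import re
import urllib.request
from urllib.parse import urljoin, urlparse

HREF = re.compile(r'href="([^"]+)"')  # crude href extraction, fine for a sketch

def crawl(url, seen=None):
    """Fetch a page, keep same-server http(s) links, recurse into each."""
    seen = set() if seen is None else seen
    seen.add(url)
    try:
        body = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
    except Exception:
        return seen  # a broken URL, which is exactly what the crawler exists to find
    host = urlparse(url).netloc
    for href in HREF.findall(body):
        link = urljoin(url, href)
        parts = urlparse(link)
        # same server only; this also drops mailto:, javascript:, and the like
        if parts.scheme in ("http", "https") and parts.netloc == host and link not in seen:
            crawl(link, seen)
    return seen
```

The recursion is the weak point here: a deeply linked site can blow through Python's recursion limit, which is part of why version 2 dropped it.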
Version 2 was basically the same, just with cleaner code and no recursion. I set the library up so that I would extend the SiteCrawler class and get notified of what was going on through callbacks. It wasn't any faster than version 1, but it did seem a bit more stable.
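I'm not reproducing the actual class here, but the interface would be shaped roughly like this; SiteCrawler is the real name, while the callback name and the internals are my guesses:

```python
from collections import deque
import re
import urllib.request
from urllib.parse import urljoin, urlparse

HREF = re.compile(r'href="([^"]+)"')  # same crude extraction as before

class SiteCrawler:
    """Iterative crawl (a deque instead of recursion); subclasses hook in via callbacks."""

    def __init__(self, start_url):
        self.start_url = start_url
        self.host = urlparse(start_url).netloc

    def on_checked(self, url, ok):
        """Called once per URL checked; override this in a subclass."""

    def crawl(self):
        queue = deque([self.start_url])
        seen = {self.start_url}
        while queue:
            url = queue.popleft()
            try:
                body = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
            except Exception:
                self.on_checked(url, False)
                continue
            self.on_checked(url, True)
            for href in HREF.findall(body):
                link = urljoin(url, href)
                parts = urlparse(link)
                if parts.scheme in ("http", "https") and parts.netloc == self.host and link not in seen:
                    seen.add(link)
                    queue.append(link)

class BrokenLinkReporter(SiteCrawler):
    def on_checked(self, url, ok):
        if not ok:
            print("broken:", url)
```

A subclass like BrokenLinkReporter is all a caller writes; the crawl loop itself never changes.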
For version 3 I decided on a drastic change and made it multithreaded. It is much, much, much faster. It works like this: there's an input Queue and an already-checked Queue. Four threads wait for input on the input Queue; when a thread gets a URL, it scrapes the links, filters out any that are already in the checked Queue, puts the filtered links on the input Queue, and puts the URL it just checked on the checked Queue. It sounds more complicated than it is. The code is also more complicated than it needs to be.
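Here's a sketch of that scheme, again not the real code. Since the only thing the "already checked" side is used for is membership tests, I've written it as a set behind a lock, with queue.Queue carrying the work for the four threads:

```python
import queue
import re
import threading
import urllib.request
from urllib.parse import urljoin, urlparse

HREF = re.compile(r'href="([^"]+)"')

def crawl(start_url, num_threads=4):
    host = urlparse(start_url).netloc
    inputs = queue.Queue()   # URLs waiting to be fetched
    checked = {start_url}    # stands in for the "already checked" Queue
    lock = threading.Lock()

    def worker():
        while True:
            url = inputs.get()
            try:
                try:
                    body = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
                except Exception:
                    print("broken:", url)
                    continue
                for href in HREF.findall(body):
                    link = urljoin(url, href)
                    parts = urlparse(link)
                    if parts.scheme not in ("http", "https") or parts.netloc != host:
                        continue
                    with lock:  # threads race on the check-then-add, so guard it
                        if link in checked:
                            continue
                        checked.add(link)
                    inputs.put(link)
            finally:
                inputs.task_done()

    for _ in range(num_threads):
        threading.Thread(target=worker, daemon=True).start()
    inputs.put(start_url)
    inputs.join()  # returns once every queued URL has been processed
    return checked
```

inputs.join() is what tells the main thread the crawl is done: every put() has to be balanced by a task_done() before it returns.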
Anyway, version 3 works well for me when I need to test a site. It has saved me a lot of time checking for broken URLs; a few clients have sites with page counts in the 300s.
But I recently found out about gevent, and since I have some free time at work, I wanted to play with it a bit. If you don't know it, gevent is a package that builds on the greenlet package on top of libevent. I'm always interested in concurrent programming and the technologies around it; that's why I had installed Stackless in the first place.
So now there's a version 4 of the SiteCrawler script, using (you guessed it) gevent. I haven't ironed out all the kinks yet. I was testing a non-pooled version of the script, and it crawled through 200 links in a matter of seconds; hopefully none of the server admins look at the logs and notice 100 simultaneous connections at 3:30 PM today. I also changed how the crawl is done quite a bit. So I'm going to stop there and will probably have more tomorrow or in the next few days.
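In the meantime, here's a sketch of what I mean by a pooled version. It is not the actual version 4 code: it crawls in breadth-first waves and uses gevent.pool.Pool to cap the simultaneous connections, and the pool size and regex extraction are illustrative choices:

```python
from gevent import monkey
monkey.patch_all()  # patch sockets first, so urllib yields instead of blocking

import re
import urllib.request
from urllib.parse import urljoin, urlparse

from gevent.pool import Pool

HREF = re.compile(r'href="([^"]+)"')

def fetch_links(url):
    """Return the same-host links on a page, or [] if the URL is broken."""
    host = urlparse(url).netloc
    try:
        body = urllib.request.urlopen(url, timeout=10).read().decode("utf-8", "replace")
    except Exception:
        print("broken:", url)
        return []
    links = []
    for href in HREF.findall(body):
        link = urljoin(url, href)
        parts = urlparse(link)
        if parts.scheme in ("http", "https") and parts.netloc == host:
            links.append(link)
    return links

def crawl(start_url, pool_size=20):
    pool = Pool(pool_size)  # at most pool_size fetches in flight at once
    seen = {start_url}
    frontier = [start_url]
    while frontier:
        next_frontier = []
        # fetch the whole frontier concurrently, pool_size pages at a time
        for links in pool.map(fetch_links, frontier):
            for link in links:
                if link not in seen:
                    seen.add(link)
                    next_frontier.append(link)
        frontier = next_frontier
    return seen
```

The pool is the difference between this and the run I described above: with it, the server sees at most pool_size connections instead of one per discovered link.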