StormCrawler 1.4 has just been released! As usual, all users are advised to upgrade to this version, as it fixes a number of bugs and adds quite a few new features.
Core dependency upgrades
- Httpclient 4.5.3
- Storm 1.0.3 #437
Core module
- JSoupParser: fixed outlink deduplication, #375
- Custom schedule based on metadata for non-success pages, #386
- Adaptive fetch scheduler #407
- Sitemap: increased default offset for guessing + made it configurable #409
- Added URLFilterBolt + use it in ESSeedInjector #421
- URLStreamGrouping #425
- Better handling of redirections for HTTP robots #4372d16
- HTTP Proxy over Basic Authentication #432
- Improved metrics for status updater cache (hits and misses) #434
- File protocol implementation #436
- Added CollectionMetrics (used in ES MetricsConsumer + ES Spout, see below) #7d35acb
- Added code for caching and retrieving content from AWS S3 #e16b66ef
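Several of the core changes above are driven purely by configuration. As a rough sketch of how two of them might be enabled together (the key names below are assumptions based on the module's conventions; check the documentation for your version), the HTTP proxy with Basic authentication (#432) and the metadata-based schedule for non-success pages (#386) could look something like this in the topology's YAML configuration:

```yaml
config:
  # HTTP proxy with Basic authentication (#432).
  # Key names are assumptions - verify against your version's docs.
  http.proxy.host: "proxy.example.com"
  http.proxy.port: 3128
  http.proxy.user: "crawler"
  http.proxy.pass: "s3cret"

  # Custom schedule based on metadata for non-success pages (#386):
  # revisit pages whose metadata key 'isFeed' equals 'true' every
  # 10 minutes after a successful fetch, but only every 30 minutes
  # after a fetch error. Values are in minutes.
  fetchInterval.isFeed=true: 10
  fetchInterval.FETCH_ERROR.isFeed=true: 30
```

The proxy host, port and credentials above are placeholders; the status-prefixed `fetchInterval.*` keys illustrate the general pattern of combining a status value with a metadata key/value pair.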
SOLR
- Basic upgrade to Solr 6.4.1
- Use ConcurrentUpdateSolrClient, #183
- Various changes to StatusUpdaterBolt
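The SOLR module's bolts and spouts are pointed at their collections via configuration. A minimal sketch, where both the key names and the URLs are assumptions based on the module's conventions rather than verified settings, might look like:

```yaml
config:
  # Locations of the Solr collections used by the status updater and
  # spout, the indexing bolt and the metrics consumer - adjust the
  # URLs and collection names to your own Solr setup.
  solr.status.url: "http://localhost:8983/solr/status"
  solr.indexer.url: "http://localhost:8983/solr/docs"
  solr.metrics.url: "http://localhost:8983/solr/metrics"
```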
Elasticsearch
- Simplified README + removed bigjar profile from pom #414
- Provide basic mapping for doc index #433
- Simple Grafana dashboard for SC metrics, #380
- Generate metrics about status counts, #389
- Spouts report the time taken by queries using CollectionMetric, #439, as illustrated below
[Figure: Spout query times displayed in Grafana, illustrating the impact of SamplerAggregationSpout on a large status index]
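To get such metrics into Elasticsearch (and from there into Grafana), the module's metrics consumer is registered like any Storm metrics consumer. A sketch, assuming the class name used by the Elasticsearch module at the time:

```yaml
config:
  # Register the ES MetricsConsumer so that topology metrics
  # (including the spout query times above) are indexed into
  # Elasticsearch, where Grafana can chart them.
  topology.metrics.consumer.register:
    - class: "com.digitalpebble.stormcrawler.elasticsearch.metrics.MetricsConsumer"
      parallelism.hint: 1
```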
As usual, it is not yet clear what the next release will contain, but we hope to switch to Elasticsearch 5 (already available on the es5.3 branch) and to provide resources for Selenium (see the jBrowserDriver branch). As I pointed out in my previous post, giving early feedback on work in progress is a great way of contributing to the project.
We'll probably also upgrade to the next release of crawler-commons, which will include a brand-new SAX-based sitemap parser, and we might move to an upcoming release of Apache Storm, to which I recently contributed a change that makes it possible to use Elasticsearch 5. Some of our StormCrawler code has also been donated to Storm, which is great!
In the meantime and as usual, thanks to all contributors and users and happy crawling!
PS: Next month I will be running a workshop in Berlin about StormCrawler, Storm in general and Elasticsearch.