One of Slickplan’s new features that we’re most excited about is the site crawler. Armed with nothing but an XML sitemap file or a webpage URL, the crawler reads the structure of an existing site and automatically feeds it into the Slickplan layout. For developers helping their clients plan updates to existing websites, this can be a major timesaver.
But how does a site crawler actually work?
We developed our own site crawling mechanism (also called a “spider” or a “bot”) just for Slickplan—but it’s not so different from the site indexers used by major search engines. Google, for example, uses crawlers to discover and index pages across the web. They scan text for keywords and take note of how often pages are updated. They use this information to determine a site’s relative rank—and they’re the foundation of the science of SEO.
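Slickplan’s crawler itself is proprietary, but the core step every spider repeats is the same: fetch a page, pull out its links, and queue those links for the next fetch. Here’s a minimal, hypothetical sketch of that link-extraction step using only Python’s standard library (none of these names come from Slickplan’s code):

```python
from html.parser import HTMLParser

class LinkExtractor(HTMLParser):
    """Collect the href of every <a> tag on a page --
    the step a spider repeats for each URL in its queue."""

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.links.append(value)

# A toy page standing in for a fetched HTML document.
page = '<html><body><a href="/about">About</a> <a href="/blog">Blog</a></body></html>'

parser = LinkExtractor()
parser.feed(page)
print(parser.links)  # ['/about', '/blog']
```

A real spider wraps this in a loop: each extracted link is resolved to an absolute URL, deduplicated, and fed back into the fetch queue until the whole site has been visited.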
We don’t need hundreds of crawlers indexing thousands of web pages at once. Instead, we set Slickplan’s crawler loose only upon request. Given an XML sitemap, the crawler reads its list of pages directly; given a URL, it follows the links it finds on each page. Either way, it uses what it discovers to generate a sitemap.
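The XML-sitemap path is the simpler of the two: a standard sitemap is just a flat list of URLs, and the page hierarchy falls out of their paths. Here’s a hypothetical sketch of that idea in plain Python (the sample data and function names are ours, not Slickplan’s):

```python
from xml.etree import ElementTree

# A toy sitemap in the standard sitemaps.org format (example data).
SITEMAP_XML = """<?xml version="1.0" encoding="UTF-8"?>
<urlset xmlns="http://www.sitemaps.org/schemas/sitemap/0.9">
  <url><loc>https://example.com/</loc></url>
  <url><loc>https://example.com/about</loc></url>
  <url><loc>https://example.com/blog/post-1</loc></url>
</urlset>"""

NS = {"sm": "http://www.sitemaps.org/schemas/sitemap/0.9"}

def sitemap_urls(xml_text):
    """Extract every <loc> URL from a sitemaps.org-style XML file."""
    root = ElementTree.fromstring(xml_text)
    return [loc.text for loc in root.findall("sm:url/sm:loc", NS)]

def build_tree(urls):
    """Nest URLs into a dict keyed by path segment -- a crude sitemap."""
    tree = {}
    for url in urls:
        parts = url.split("//", 1)[1].split("/", 1)
        segments = parts[1].split("/") if len(parts) > 1 and parts[1] else []
        node = tree
        for seg in segments:
            node = node.setdefault(seg, {})
    return tree

print(build_tree(sitemap_urls(SITEMAP_XML)))
# {'about': {}, 'blog': {'post-1': {}}}
```

A URL-only import has to work harder (fetching pages and following links), but it ends at the same place: a tree of pages ready to lay out visually.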
This may not lead to a flawless picture of a site—in fact, it may reveal errors where a site has been organized incorrectly. That’s where you come in: once your import is complete, you can tweak the sitemap to look just the way you want. Then, you can export it as a PDF, HTML, PNG, XML, or vector file. You can even export it straight to WordPress as pages.
Crawling uses serious server bandwidth—that’s part of the reason the feature is available only to our Unlimited users. But we think it’s worth it.
Although Slickplan isn’t specifically an SEO tool, using it to build your sitemap can ensure that when your site goes live, its structure will be clean, and therefore easy to index. Easy indexing means happy crawlers. And happy crawlers mean a better search ranking.
What more could a developer ask for?
Ready to get started with Slickplan? Learn what else our monthly plans have to offer.