Spider Node.js Reference Documentation
Spider
Current Version: 10.1.2
Chilkat Spider web crawler object.
Object Creation
var obj = new chilkat.Spider();
Properties
AbortCurrent
· boolean
When set to true, causes the currently running method to abort. Methods that always finish quickly (i.e., that involve no lengthy file operations or network communications) are not affected. If no method is running, this property is automatically reset to false when the next method is called. When the abort occurs, this property is reset to false. Both synchronous and asynchronous method calls can be aborted. (A synchronous method call can be aborted by setting this property from a separate thread.)
AvoidHttps
· boolean
If set to 1 (true), the spider will avoid all HTTPS URLs. The default is 0 (false).
CacheDir
· string
Specifies a cache directory to use for spidering. If either the FetchFromCache or UpdateCache property is true, this is the location of the cache to be used. Note: the Internet Explorer, Netscape, and Firefox caches are completely separate from the Chilkat Spider cache directory. You should specify a new and empty directory.
ChopAtQuery
· boolean
If equal to 1 (true), then the query portion of all URLs are automatically removed when adding to the unspidered list. The default value is 0 (false).
ConnectTimeout
· integer
The maximum number of seconds to wait while connecting to an HTTP server.
DebugLogFilePath
· string
If set to a file path, this property logs the LastErrorText of each Chilkat method or property call to the specified file. This logging helps identify the context and history of Chilkat calls leading up to any crash or hang, aiding in debugging.
Enabling the VerboseLogging property provides more detailed information. This property is mainly used for debugging rare instances where a Chilkat method call causes a hang or crash, which should generally not happen.
Possible causes of hangs include:
- A timeout property set to 0, indicating an infinite timeout.
- A hang occurring within an event callback in the application code.
- An internal bug in the Chilkat code causing the hang.
Domain
· string, read-only
The domain name that is being spidered. This is the domain previously set in the Initialize method.
FetchFromCache
· boolean
If equal to 1 (true) then pages are fetched from cache when possible. If 0, the cache is ignored. The default value is 1. Regardless, if no CacheDir is set then the cache is not used.
FinalRedirectUrl
· string, read-only
If the last URL crawled was redirected (as indicated by the WasRedirected property), this property will contain the final redirect URL.
LastErrorHtml
· string, read-only
Provides HTML-formatted information about the last called method or property. If a method call fails or behaves unexpectedly, check this property for details. Note that information is available regardless of the method call's success.
LastErrorText
· string, read-only
Provides plain text information about the last called method or property. If a method call fails or behaves unexpectedly, check this property for details. Note that information is available regardless of the method call's success.
LastErrorXml
· string, read-only
Provides XML-formatted information about the last called method or property. If a method call fails or behaves unexpectedly, check this property for details. Note that information is available regardless of the method call's success.
LastFromCache
· boolean, read-only
Equal to 1 if the last page spidered was fetched from the cache. Otherwise equal to 0.
LastHtml
· string, read-only
The HTML text of the last page fetched by the spider.
LastHtmlDescription
· string, read-only
The HTML META description from the last page fetched by the spider.
LastHtmlKeywords
· string, read-only
The HTML META keywords from the last page fetched by the spider.
LastHtmlTitle
· string, read-only
The HTML title from the last page fetched by the spider.
LastMethodSuccess
· boolean
Indicates the success or failure of the most recent method call: true means success, false means failure. This property is unchanged by property setters or getters. It exists to address challenges in checking for null or Nothing return values in certain programming languages.
LastModDateStr
· string, read-only
The last modification date/time from the last page fetched by the spider.
LastUrl
· string, read-only
The URL of the last page spidered.
MaxResponseSize
· integer
The maximum HTTP response size allowed. The spider will automatically fail any pages larger than this size. The default value is 250,000 bytes.
MaxUrlLen
· integer
The maximum URL length allowed. URLs longer than this are not added to the unspidered list. The default value is 200.
NumAvoidPatterns
· integer, read-only
The number of avoid patterns previously set by calling AddAvoidPattern.
NumFailed
· integer, read-only
The number of URLs in the object's failed URL list.
NumOutboundLinks
· integer, read-only
The number of URLs in the object's outbound links URL list.
NumSpidered
· integer, read-only
The number of URLs in the object's already-spidered URL list.
NumUnspidered
· integer, read-only
The number of URLs in the object's unspidered URL list.
PreferIpv6
· boolean
If true, then use IPv6 over IPv4 when both are supported for a particular domain. The default value of this property is false, which will choose IPv4 over IPv6.
ProxyDomain
· string
The domain name of a proxy host if an HTTP proxy is used.
ProxyLogin
· string
If an HTTP proxy is used and it requires authentication, this property specifies the HTTP proxy login.
ProxyPassword
· string
If an HTTP proxy is used and it requires authentication, this property specifies the HTTP proxy password.
ProxyPort
· integer
The port number of a proxy server if an HTTP proxy is used.
ReadTimeout
· integer
The maximum number of seconds to wait when reading from an HTTP server.
UpdateCache
· boolean
If equal to 1 (true), then pages are saved to the cache. If 0, the cache is ignored. The default value is 1. Regardless, if no CacheDir is set then the cache is not used.
UserAgent
· string
The value of the HTTP user-agent header field to be sent with HTTP requests. This can be set to any desired value, but be aware that some websites may reject requests from unknown user agents.
VerboseLogging
· boolean
If set to true, then the contents of LastErrorText (or LastErrorXml, or LastErrorHtml) may contain more verbose information. The default value is false. Verbose logging should only be used for debugging. The potentially large quantity of logged information may adversely affect performance.
Version
· string, read-only
The version of the Chilkat library (e.g., "10.1.2").
WasRedirected
· boolean, read-only
Indicates whether the last URL crawled was redirected. (true = yes, false = no)
WindDownCount
· integer
The "wind-down" phase begins when this number of URLs has been spidered. When in the wind-down phase, no new URLs are added to the unspidered list. The default value is 0 which means that there is NO wind-down phase.
Methods
AddAvoidOutboundLinkPattern
· Does not return anything (returns Undefined).
· pattern String
Adds a wildcarded pattern to prevent collecting matching outbound link URLs. For example, if "*google*" is added, then any outbound links containing the word "google" will be ignored. The "*" character matches zero or more of any character.
AddAvoidPattern
· Does not return anything (returns Undefined).
· pattern String
Adds a wildcarded pattern to prevent spidering matching URLs. For example, if "*register*" is added, then any url containing the word "register" is not spidered. The "*" character matches zero or more of any character.
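To illustrate the "*" matching rule, here is a minimal sketch (not Chilkat's internal code) of how such a wildcard pattern can be evaluated against a URL:

```javascript
// Hypothetical illustration of wildcard matching, where "*" matches
// zero or more of any character. Not the Chilkat implementation.
function wildcardMatch(pattern, url) {
  // Escape regex metacharacters (but not "*"), then translate "*" to ".*".
  const escaped = pattern.replace(/[.+?^${}()|[\]\\]/g, "\\$&");
  const re = new RegExp("^" + escaped.replace(/\*/g, ".*") + "$");
  return re.test(url);
}

console.log(wildcardMatch("*register*", "https://example.com/register/new")); // true
console.log(wildcardMatch("*register*", "https://example.com/products/"));    // false
```

With the avoid pattern "*register*" added, the first URL above would be skipped by the spider while the second would still be crawled.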
AddMustMatchPattern
· Does not return anything (returns Undefined).
· pattern String
Adds a wildcarded pattern to limit spidering to only URLs that match the pattern. For example, if "*/products/*" is added, then only URLs containing "/products/" are spidered. This is helpful for only spidering a portion of a website. The "*" character matches zero or more of any character.
AddUnspidered
· Does not return anything (returns Undefined).
· url String
To begin spidering you must call this method one or more times to provide starting points. It adds a single URL to the object's internal queue of URLs to be spidered.
CanonicalizeUrl
· Returns a String.
· url String
Canonicalizes a URL by doing the following:
- Drops username/password if present.
- Drops fragment if present.
- Converts domain to lowercase.
- Removes port 80 or 443.
- Removes default documents such as default.asp, index.html, index.htm, default.html, default.htm, index.php, index.asp, default.php, and filenames ending in .cfm, .aspx, .php3, .pl, .cgi, .txt, .shtml, or .phtml.
- Removes "www." from the domain if present.
Returns null on failure
ClearFailedUrls
· Does not return anything (returns Undefined).
Clears the object's internal list of URLs that could not be downloaded.
ClearOutboundLinks
· Does not return anything (returns Undefined).
Clears the object's internal list of outbound URLs that will automatically accumulate while spidering.
ClearSpideredUrls
· Does not return anything (returns Undefined).
Clears the object's internal list of already-spidered URLs that will automatically accumulate while spidering.
CrawlNext
· Returns a Boolean.
Crawls the next URL in the internal list of unspidered URLs. The URL is moved from the unspidered list to the spidered list. Any new links within the same domain and not yet spidered are added to the unspidered list (provided that they do not match "avoid" patterns, etc.). Any new outbound links are added to the outbound URL list. If successful, the HTML of the downloaded page is available in the LastHtml property. If there are no more URLs left unspidered, the method returns false. Information about the URL crawled is available in the LastUrl, LastFromCache, and LastModDateStr properties.
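The typical crawl loop seeds the spider with AddUnspidered and then calls CrawlNext until it returns false. The sketch below mirrors that call pattern against a small stand-in object with a fixed in-memory link graph (the real chilkat.Spider downloads pages over HTTP), so the control flow is self-contained:

```javascript
// Stand-in for chilkat.Spider; method names mirror the documented API.
function MockSpider(linkGraph) {
  this.unspidered = [];
  this.spidered = [];
  this.LastUrl = "";
  this.Initialize = function (domain) { this.Domain = domain; };
  this.AddUnspidered = function (url) { this.unspidered.push(url); };
  this.CrawlNext = function () {
    if (this.unspidered.length === 0) return false; // nothing left to crawl
    const url = this.unspidered.shift();            // move to spidered list
    this.spidered.push(url);
    this.LastUrl = url;
    for (const link of (linkGraph[url] || [])) {    // enqueue unseen links
      if (!this.spidered.includes(link) && !this.unspidered.includes(link)) {
        this.unspidered.push(link);
      }
    }
    return true;
  };
}

const spider = new MockSpider({
  "http://example.com/":  ["http://example.com/a", "http://example.com/b"],
  "http://example.com/a": ["http://example.com/b"],
});
spider.Initialize("example.com");
spider.AddUnspidered("http://example.com/");
const visited = [];
while (spider.CrawlNext()) visited.push(spider.LastUrl);
console.log(visited);
// → [ 'http://example.com/', 'http://example.com/a', 'http://example.com/b' ]
```

With the real object, the same loop applies: replace MockSpider with new chilkat.Spider(), and read LastHtml after each successful CrawlNext call.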
CrawlNextAsync
· Returns a Task
Creates an asynchronous task to call the CrawlNext method with the arguments provided.
Returns null on failure
FetchRobotsText
· Returns a String.
Returns the contents of the robots.txt file from the domain being crawled. This spider object will not crawl URLs excluded by robots.txt. If you believe the spider is not behaving correctly, please notify us at [email protected] and provide information detailing a case that allows us to reproduce the problem.
Returns null on failure
FetchRobotsTextAsync
· Returns a Task
Creates an asynchronous task to call the FetchRobotsText method with the arguments provided.
Returns null on failure
GetAvoidPattern
· Returns a String.
· index Number
Returns the Nth avoid pattern previously added by calling AddAvoidPattern. Indexing begins at 0.
Returns null on failure
GetBaseDomain
· Returns a String.
· domain String
Returns the second-level + top-level domain of the domain. For example, if domain is "xyz.example.com", this returns "example.com". For some domains, such as "xyz.example.co.uk", the top 3 levels are returned, such as "example.co.uk".
Returns null on failure
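The keep-two-or-three-labels behavior described above can be sketched as follows. This is a simplified illustration only: the suffix list here is a tiny hypothetical subset, whereas a complete implementation would consult the full public suffix list.

```javascript
// Simplified sketch of base-domain extraction (not Chilkat's code).
// Keeps two labels normally, three when the registry suffix itself
// has two labels (e.g. "co.uk").
const twoLabelSuffixes = new Set(["co.uk", "org.uk", "ac.uk", "com.au", "co.jp"]); // partial list
function getBaseDomain(domain) {
  const parts = domain.toLowerCase().split(".");
  const lastTwo = parts.slice(-2).join(".");
  const keep = twoLabelSuffixes.has(lastTwo) ? 3 : 2;
  return parts.slice(-keep).join(".");
}

console.log(getBaseDomain("xyz.example.com"));   // → "example.com"
console.log(getBaseDomain("xyz.example.co.uk")); // → "example.co.uk"
```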
GetFailedUrl
· Returns a String.
· index Number
Returns the Nth URL in the failed URL list. Indexing begins at 0.
Returns null on failure
GetOutboundLink
· Returns a String.
· index Number
Returns the Nth URL in the outbound links URL list. Indexing begins at 0.
Returns null on failure
GetSpideredUrl
· Returns a String.
· index Number
Returns the Nth URL in the already-spidered URL list. Indexing begins at 0.
Returns null on failure
GetUnspideredUrl
· Returns a String.
· index Number
Returns the Nth URL in the unspidered URL list. Indexing begins at 0.
Returns null on failure
GetUrlDomain
· Returns a String.
· url String
Returns the domain name part of a URL. For example, if the URL is "https://www.chilkatsoft.com/test.asp", then "www.chilkatsoft.com" is returned.
Returns null on failure
Initialize
· Does not return anything (returns Undefined).
· domain String
Initializes the object to begin spidering a domain. Calling Initialize clears any patterns added via the AddAvoidOutboundLinkPattern, AddAvoidPattern, and AddMustMatchPattern methods. The domain name passed to this method is what is returned by the Domain property. The spider only crawls URLs within the same domain.
LoadTaskCaller
· Returns Boolean (true for success, false for failure).
· task Task
Loads the caller of the task's async method.
RecrawlLast
· Returns a Boolean.
Re-crawls the last URL spidered. This is helpful when cookies set in a previous page load cause the page to be loaded differently the next time.
RecrawlLastAsync
· Returns a Task
Creates an asynchronous task to call the RecrawlLast method with the arguments provided.
Returns null on failure
SkipUnspidered
· Does not return anything (returns Undefined).
· index Number
Moves a URL from the unspidered list to the spidered list. This allows an application to skip a specific URL.
SleepMs
· Does not return anything (returns Undefined).
· numMilliseconds Number
Suspends the execution of the current thread until the time-out interval elapses.