  * **Item Pipeline**: Post-process and store your scraped data (a minimal pipeline sketch follows this list).
  * **Feed exports**: Output your scraped data using different formats and storages.
  * **Link Extractors**: Convenient classes to extract links to follow from pages.
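A minimal sketch of the Item Pipeline idea; the **PricePipeline** class and the **price** field are made-up names for illustration, not part of an existing project:<code python>
# pipelines.py - drop items without a price and normalize the rest.
# Enabled in settings.py with e.g.:
#   ITEM_PIPELINES = {'myproject.pipelines.PricePipeline': 300}
from scrapy.exceptions import DropItem

class PricePipeline(object):
    def process_item(self, item, spider):
        if not item.get('price'):
            raise DropItem('missing price in %s' % item)
        item['price'] = float(item['price'])
        return item
</code>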
==== Data Flow ====
(Scrapy data flow diagram)
SET VS90COMNTOOLS=%VS100COMNTOOLS%
</code>
  * upgrade setuptools:<code>
pip install -U setuptools
</code>
=== Install pyopenssl ===
Step by step install openssl:
  - **Recursive algorithm** to find all URLs linked from the starting URL and build up the network of URLs linked to it
  - **Rule-based link extraction algorithm** to filter out only the URLs it wants to download (see the sketch after this list)
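A minimal sketch of both ideas with a plain Spider: a rule (the **allow** pattern) filters which links are extracted, and every extracted link is requested recursively with the same callback. The spider name, start URL and allow pattern are assumptions for illustration:<code python>
import scrapy
from scrapy.linkextractors import LinkExtractor

class RecursiveSpider(scrapy.Spider):
    name = 'recursive_example'            # assumed name
    start_urls = ['http://example.com/']  # assumed starting url

    # rule: only links whose path matches this pattern are followed
    link_extractor = LinkExtractor(allow=(r'/docs/',))

    def parse(self, response):
        yield {'url': response.url}
        # recursion: each extracted link is scheduled with the same callback;
        # Scrapy's built-in duplicate filter stops already-visited URLs
        for link in self.link_extractor.extract_links(response):
            yield scrapy.Request(link.url, callback=self.parse)
</code>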
==== Scrapy linkextractors package ====
refer: http://
Link extractors are objects whose only purpose is to extract links from web pages. The only public method that every link extractor has is **extract_links**, which receives a Response object and returns a list of scrapy.link.Link objects.
=== linkextractors classes ===
Link extractor classes bundled with Scrapy are provided in the **scrapy.linkextractors** module. Some basic classes in **scrapy.linkextractors** used to extract links:
  * **scrapy.linkextractors.lxmlhtml.LxmlLinkExtractor**, which is usually imported through its alias **scrapy.linkextractors.LinkExtractor**. Its main constructor parameters:
  * **allow_domains (str or list)** – a single value or a list of strings containing domains which **will be considered for extracting the links**
  * **deny_domains (str or list)** – a single value or a list of strings containing domains which **won't be considered for extracting the links**
  * **deny_extensions (list)** – a single value or a list of strings containing file extensions that should be **ignored when extracting links**
  * **restrict_xpaths (str or list)** – an XPath (or list of XPaths) which defines regions inside the response where links should be extracted from. If given, **only the text selected by those XPaths** will be scanned for links. See examples below.
  * **restrict_css (str or list)** – a CSS selector (or list of selectors) which defines regions inside the response where links should be extracted from. Has **the same behaviour as restrict_xpaths**
...................
</code>
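A short illustration of the parameters above; the **allow**, **deny_domains**, **deny_extensions** and **restrict_xpaths** values are made-up examples, not taken from a real project:<code python>
from scrapy.linkextractors import LinkExtractor   # alias of LxmlLinkExtractor

# keep only category pages, never follow partner.example.org,
# skip .pdf/.zip links, and only scan the main content <div>
extractor = LinkExtractor(
    allow=(r'/category/\w+',),
    deny_domains=('partner.example.org',),
    deny_extensions=('pdf', 'zip'),
    restrict_xpaths=('//div[@id="content"]',),
)

# inside a spider callback, where response is the downloaded page:
#   for link in extractor.extract_links(response):
#       print(link.url, link.text)
</code>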
=== extract links with linkextractors ===
Extract the files in an HTML page whose links appear in the extractor's **tags=('...')**:<code python>
# sle is the project's alias for a Scrapy link extractor class;
# FileItem and getFullUrl are project helpers (only partly shown here)
filesExtractor = sle(allow=("/...",))
links = [l for l in self.filesExtractor.extract_links(response) if l not in self.seen]
file_item = FileItem()
file_urls = []
if len(links) > 0:
    for link in links:
        self.seen.add(link)                  # remember the link so it is not emitted twice
        fullurl = getFullUrl(link.url, ...)  # build the absolute URL
        file_urls.append(fullurl)
    file_item['file_urls'] = file_urls
</code>
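Because the block above depends on project-specific helpers that are only partly visible, below is a self-contained sketch of the same pattern using only the standard Scrapy API; the allow pattern and the **file_urls** naming are assumptions:<code python>
from scrapy.linkextractors import LinkExtractor

# links to downloadable files sit in <a>/<area> tags; deny_extensions=() is needed
# because .pdf/.doc/... are in Scrapy's ignored extensions by default
files_extractor = LinkExtractor(allow=(r'\.(pdf|doc|xls)$',),
                                tags=('a', 'area'), attrs=('href',),
                                deny_extensions=())

def extract_file_urls(response, seen):
    """Return absolute URLs of file links on this page that were not seen before."""
    file_urls = []
    for link in files_extractor.extract_links(response):
        if link.url in seen:          # dedupe on the URL rather than the Link object
            continue
        seen.add(link.url)
        file_urls.append(link.url)    # Link.url is already an absolute URL
    return file_urls
</code>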
==== Scrapy Selector Package ====
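A minimal sketch of the selectors provided by **scrapy.selector**, run against a hand-written HTML snippet (not from a real crawl):<code python>
from scrapy.selector import Selector

html = '<html><body><div id="content"><a href="/docs/intro.html">Intro</a></div></body></html>'
sel = Selector(text=html)

# the same data extracted with XPath and with CSS selectors
print(sel.xpath('//div[@id="content"]/a/@href').extract_first())  # /docs/intro.html
print(sel.css('div#content a::text').extract_first())             # Intro
</code>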
==== Rule in CrawlSpider ====
Understanding how to use Rule in CrawlSpider:
  * **Constructor of Rule** (see the sketch below)
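In Scrapy 1.x the constructor is **Rule(link_extractor, callback=None, cb_kwargs=None, follow=None, process_links=None, process_request=None)**. A hedged sketch of how rules drive a CrawlSpider; the spider name, domain and URL patterns are assumptions:<code python>
from scrapy.spiders import CrawlSpider, Rule
from scrapy.linkextractors import LinkExtractor

class DocsSpider(CrawlSpider):
    name = 'docs_example'               # assumed
    allowed_domains = ['example.com']   # assumed
    start_urls = ['http://example.com/docs/']

    rules = (
        # follow category pages recursively, no callback needed
        Rule(LinkExtractor(allow=(r'/docs/category/',)), follow=True),
        # parse article pages with parse_item; follow defaults to False when a callback is set
        Rule(LinkExtractor(allow=(r'/docs/article/\d+',)), callback='parse_item'),
    )

    def parse_item(self, response):
        yield {'url': response.url,
               'title': response.xpath('//title/text()').extract_first()}
</code>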