Sure, the code can be optimized into the following form:
```python
import requests
import time
import random
from bs4 import BeautifulSoup
from rich.progress import Progress

headers = {
    'User-Agent': 'Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/112.0.0.0 Safari/537.36 Edg/112.0.1722.39'
}

def crawl_chapter(book_url, file, titles, session):
    """Fetch one chapter page and append its title and text to the output file."""
    response = session.get(book_url, headers=headers)
    response.encoding = 'utf-8'
    soup = BeautifulSoup(response.text, 'lxml')
    content_tag = soup.find(attrs={'id': 'chaptercontent'})
    if not content_tag:
        return
    title_tag = soup.h1
    if title_tag is None or title_tag.string is None:
        return
    title = title_tag.string.strip()
    if title in titles:  # skip chapters that were already written
        return
    titles.add(title)
    content = content_tag.get_text().strip()
    file.write(f'{title}\n\n{content}\n\n')
    time.sleep(random.uniform(1, 3))  # polite random delay between requests

if __name__ == '__main__':
    with open('大秦:秦始皇能听到我的心声最新章节列表.txt', 'a', encoding='utf-8') as f:
        titles = set()
        with Progress() as progress:
            task = progress.add_task("[green]Crawling chapters...", total=471)
            with requests.Session() as session:
                for i in range(1, 472):
                    # URL of the chapter page
                    book_url = f'https://www.qimao5.com/book/144498/{i}.html'
                    crawl_chapter(book_url, f, titles, session)
                    progress.update(task, advance=1)
```
This avoids fetching the same content twice, and keeping the title and content in separate variables makes the code clearer and easier to follow.
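One thing the script does not handle is transient network failures: a single dropped connection inside `session.get` would abort the whole crawl. Below is a minimal retry wrapper you could call in place of `session.get`; the function name, parameters, and backoff scheme are illustrative, not part of the original script.

```python
import time

import requests


def fetch_with_retry(session, url, headers=None, retries=3, timeout=10, delay=2.0):
    """GET a URL, retrying on network/HTTP errors with a growing delay."""
    for attempt in range(retries):
        try:
            response = session.get(url, headers=headers, timeout=timeout)
            response.raise_for_status()  # treat HTTP 4xx/5xx as failures too
            return response
        except requests.RequestException:
            if attempt == retries - 1:
                raise  # out of retries: surface the error to the caller
            time.sleep(delay * (attempt + 1))  # back off a little longer each time
```

In `crawl_chapter` you would then replace `session.get(book_url, headers=headers)` with `fetch_with_retry(session, book_url, headers=headers)`, so a flaky page is retried a few times before the crawl gives up.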