| commit | 97d8e2575ebfc1742adf520c07127e88017e375f | |
|---|---|---|
| author | Jordan Cook <JWCook@users.noreply.github.com> | Tue Aug 10 17:51:36 2021 |
| committer | Jordan Cook <jordan.cook@pioneer.com> | Tue Aug 10 18:26:14 2021 |
| tree | da7ba09a8774921a87177068fdea6cf744df71ea | |
| parent | 6f21685411d6ba9c572b08cf9c3e1bd6a3dae1d6 | |
| parent | c9710a1a15458dc186999e243ef6531ac02c0474 | |

Merge pull request #349 from JWCook/sqlite-clear: Update Sqlite clear()
requests-cache is a transparent, persistent HTTP cache for the Python requests library. It's a convenient tool for web scraping, consuming REST APIs, working with slow or rate-limited sites, or any other scenario in which you're making lots of requests that are expensive and/or likely to be sent more than once.
See the full project documentation at: https://requests-cache.readthedocs.io
You can use requests_cache.CachedSession as a drop-in replacement for requests.Session, or install caching globally to add it to all requests functions.

First, install with pip:
```bash
pip install requests-cache
```
Next, use requests_cache.CachedSession to send and cache requests. To quickly demonstrate how it compares to plain requests:
This takes ~1 minute:
```python
import requests

session = requests.Session()
for i in range(60):
    session.get('http://httpbin.org/delay/1')
```
This takes ~1 second:
```python
import requests_cache

session = requests_cache.CachedSession('demo_cache')
for i in range(60):
    session.get('http://httpbin.org/delay/1')
```
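To check the timings yourself, you can wrap both sessions in a small timer. This is just an illustrative sketch (the `timed_requests` helper is hypothetical, not part of requests-cache):

```python
import time

import requests
import requests_cache

def timed_requests(session, n=10):
    """Send n GET requests through the given session and return the elapsed seconds."""
    start = time.perf_counter()
    for _ in range(n):
        session.get('http://httpbin.org/delay/1')
    return time.perf_counter() - start

# Each uncached request waits out the full 1-second delay
print(f'requests.Session: {timed_requests(requests.Session()):.1f}s')

# At most the first cached request hits the network; the rest are near-instant
print(f'CachedSession: {timed_requests(requests_cache.CachedSession("demo_cache")):.1f}s')
```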
The URL in these examples adds a delay of 1 second, simulating a slow or rate-limited website. With caching, the response will be fetched once, saved to demo_cache.sqlite, and subsequent requests will return the cached response near-instantly.
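You can confirm which responses came from the cache: requests-cache sets a from_cache attribute on the responses it returns. A minimal sketch:

```python
from requests_cache import CachedSession

session = CachedSession('demo_cache')

# The first request is fetched over the network and written to demo_cache.sqlite
response = session.get('http://httpbin.org/delay/1')
print(response.from_cache)  # False (unless this URL was already cached)

# The repeated request is served from the cache near-instantly
response = session.get('http://httpbin.org/delay/1')
print(response.from_cache)  # True
```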
If you don't want to manage a session object, requests-cache can also be installed globally:
```python
import requests
import requests_cache

requests_cache.install_cache('demo_cache')
requests.get('http://httpbin.org/delay/1')
```
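The global patch can also be bypassed or removed. A brief sketch using requests_cache.disabled() and requests_cache.uninstall_cache(), both part of the library's patching API:

```python
import requests
import requests_cache

requests_cache.install_cache('demo_cache')

# Temporarily bypass the cache for a block of requests
with requests_cache.disabled():
    requests.get('http://httpbin.org/delay/1')

# Remove the global patch when you no longer want caching
requests_cache.uninstall_cache()
```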
To find out more about what you can do with requests-cache, see the full project documentation at https://requests-cache.readthedocs.io.