@gene1wood gene1wood commented Jun 8, 2018

Create a new GitHubClient class which extends the Client class by modifying
the `request` method and creating four new methods:

* Establish a new `paginate` parameter to the constructor
* Modify the `request` method to
  * check whether any allowed requests remain under the rate limit, and, if
    none do, wait the designated amount of time before making the request
  * if the `paginate` parameter is true, look in the response for a
    [link header](https://tools.ietf.org/html/rfc5988) and, if one is found,
    extend the fetched data by calling the `get_additional_pages` method
* Create the `no_ratelimit_remaining` method to check whether no
  requests remain under the rate limit
* Create the `get_additional_pages` method, which calls the
  `get_next_link_url` method to determine whether there are additional pages.
  If so, fetch the next page and recursively continue fetching pages until the
  last page is fetched
* Create the `get_next_link_url` method, which parses the link header and
  returns either the URL of the next page, if there is one, or an empty
  string if not
* Create the `ratelimit_seconds_remaining` method, which returns the number
  of seconds remaining until the rate limit is cleared
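The link-header parsing step above can be sketched roughly like this. This is a minimal illustration of the described behavior, not the PR's exact implementation; the function name comes from the description, and the regex-based parsing is an assumption:

```python
import re

def get_next_link_url(link_header):
    """Parse an RFC 5988 Link header and return the URL of the
    rel="next" page, or an empty string if there is none."""
    # A GitHub Link header looks like:
    #   <https://api.github.com/issues?page=2>; rel="next",
    #   <https://api.github.com/issues?page=5>; rel="last"
    for part in link_header.split(','):
        # Each part is a <URL>; rel="..." pair
        match = re.search(r'<([^>]+)>;\s*rel="next"', part)
        if match:
            return match.group(1)
    return ''
```

When the last page has been fetched, GitHub omits the `rel="next"` entry, so the empty-string return is what terminates the recursive fetching in `get_additional_pages`.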

This new functionality can be used like this:

```python
>>> from agithub.GitHub import GitHub
>>> g = GitHub(token='token', paginate=True)
>>> status, data = g.issues.get(filter='subscribed', foobar='llama')
```
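The rate-limit wait described above can be sketched as follows. GitHub reports the reset time in the `X-RateLimit-Reset` response header as a Unix epoch timestamp; this sketch assumes that value has already been extracted, and the signature here is illustrative rather than the PR's exact one:

```python
import time

def ratelimit_seconds_remaining(ratelimit_reset_epoch):
    """Return the number of seconds until the rate limit resets,
    given the epoch timestamp from the X-RateLimit-Reset header."""
    remaining = ratelimit_reset_epoch - time.time()
    # If the reset time has already passed, there is nothing to wait for
    return max(remaining, 0)
```

The `request` method would call this (and sleep for the returned duration) only when `no_ratelimit_remaining` indicates the allowance is exhausted.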

Based on conversations with jpaugh and nnja indicating we're going to redo the unit tests from scratch, I removed my in-progress unit tests from this branch. I figured I'd submit it now to make it available to folks despite the lack of unit tests.

This fixes #10
