A NodeJS script to download all layers within a public or protected ArcGIS Online Feature or Map Service as GeoJSON.
By default the step size is `1000` features. If the feature layer has a record limit lower than the step size, the script fails to download the entire layer.

Suppose a feature layer has an ID range of 1-750 for features, with a record limit of 500. The package queries the starting and ending IDs and gets `[1, 750]`. It then queries that full range, but receives only the first 500 features because of the server limit. On the next step it queries starting at 750, so everything from 500 to 750 is missed.

The user of the package may of course specify a custom step size that is 500 or lower, but it would be ideal for the package to catch this case. One solution is to check the layer's max record count before calculating the step size. I think the best solution, though, may be to have the loop start at the last downloaded ID value. In other words, instead of this code in the loop:

```javascript
ids.start = ids.start + max + 1;
ids.end = ids.end + max + 1;
```

it would be:

```javascript
ids.start = lastDownloadedId + 1;
ids.end = ids.start + max + 1;
```

The query end may not really matter, though. It may be simplest to always query for `>= ids.start` and let the max record count limit how much comes back each step, which would work when using a `lastDownloadedId` value.

A test case for this scenario has been added to PR #21 but is currently commented out.
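To illustrate why resuming from the last downloaded ID avoids the gap, here is a minimal self-contained sketch. The names (`queryByIdRange`, `downloadAll`) and the in-memory "server" are hypothetical stand-ins for the package's actual query logic, assuming a layer with IDs 1-750 and a record limit of 500 as in the scenario above:

```javascript
// Simulated layer: features with IDs 1..750, server record limit of 500.
// These values mirror the example scenario; they are not real API values.
const ALL_IDS = Array.from({ length: 750 }, (_, i) => i + 1);
const MAX_RECORD_COUNT = 500;

// Hypothetical query: returns at most MAX_RECORD_COUNT features with
// id >= start, mimicking a server that silently truncates results.
function queryByIdRange(start) {
  return ALL_IDS.filter((id) => id >= start).slice(0, MAX_RECORD_COUNT);
}

function downloadAll() {
  const downloaded = [];
  let start = 1;
  for (;;) {
    const page = queryByIdRange(start);
    if (page.length === 0) break; // no features left
    downloaded.push(...page);
    // Resume from the last ID actually received, not from a fixed
    // step offset, so a server-side record limit cannot create gaps.
    const lastDownloadedId = page[page.length - 1];
    start = lastDownloadedId + 1;
  }
  return downloaded;
}
```

With a fixed step of 1000 the loop would fetch IDs 1-500 and then jump past 750; with `lastDownloadedId + 1` the second iteration picks up at 501 and the full 750 features are returned.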
This issue was opened by jwoyame and has received 1 comment.