Details
- Type: New Feature
- Status: Closed
- Priority: P2
- Resolution: Done
- Labels: None
- Sprint: CP: sprint 75, CP: sprint 76
- Story Points: 5
- Development Team: Core: Platform
Description
The batch endpoints should work as follows (a server-side sketch follows this list):
- The client bundles a set of records into a single JSON array structure called a "batch". It is up to the client to split the entire collection of records into multiple batches.
- The byte size of the batch is declared by the client through the Content-Length header, with the upper limit decided on the server side by the module. If the upper limit is exceeded, the module reports a 413 Payload Too Large error to the client. The server may also handle requests with a missing or wrong Content-Length and report this error during processing once the limit is reached.
- A DB transaction is opened when processing of a batch starts and is committed when all records from the batch have been successfully stored in the DB, at which point the server responds with 201 Created.
- In case of errors, the server rolls back the transaction and responds with 422 Unprocessable Entity (an all-or-nothing approach).
- The server must use an efficient method to store the records in the DB: e.g. the saveBatch method in the RMB PostgresClient (based on the legacy Vert.x MySQL/PostgreSQL driver), or the batch operations from https://vertx.io/docs/vertx-pg-client/java/.
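A minimal server-side sketch of this flow, assuming a Vert.x Web handler and the vertx-pg-client batch API linked above. The table name, column layout, size limit, and class name are illustrative assumptions rather than part of this issue:

```java
import io.vertx.core.json.JsonArray;
import io.vertx.core.json.JsonObject;
import io.vertx.ext.web.RoutingContext;
import io.vertx.pgclient.PgPool;
import io.vertx.sqlclient.Tuple;

import java.util.List;
import java.util.UUID;
import java.util.stream.Collectors;

public class BatchSyncHandler {

  private static final long MAX_BATCH_BYTES = 10 * 1024 * 1024; // assumed server-side limit

  private final PgPool pool;

  public BatchSyncHandler(PgPool pool) {
    this.pool = pool;
  }

  public void handle(RoutingContext ctx) {
    // Reject oversized batches up front based on the declared Content-Length (413).
    String contentLength = ctx.request().getHeader("Content-Length");
    if (contentLength != null && Long.parseLong(contentLength) > MAX_BATCH_BYTES) {
      ctx.response().setStatusCode(413).end("Payload Too Large");
      return;
    }

    // The client sends the batch as a single JSON array of records (Vert.x 4.3+ body API).
    JsonArray batch = ctx.body().asJsonArray();
    List<Tuple> rows = batch.stream()
        .map(record -> (JsonObject) record)
        .map(record -> Tuple.of(UUID.fromString(record.getString("id")), record))
        .collect(Collectors.toList());

    // withTransaction opens the transaction, commits when the returned Future
    // succeeds, and rolls back when it fails (the all-or-nothing approach).
    pool.withTransaction(conn ->
            conn.preparedQuery("INSERT INTO instance (id, jsonb) VALUES ($1, $2)")
                .executeBatch(rows))
        .onSuccess(rs -> ctx.response().setStatusCode(201).end())
        .onFailure(err -> ctx.response().setStatusCode(422).end(err.getMessage()));
  }
}
```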
Note on the API
mod-inventory-storage already includes a partial batch update API called inventory-storage-batch. The semantics of this API are not entirely compatible with what has been proposed above, so for backwards compatibility the new API should be introduced as a new interface, independent of the existing one. The new interface should be called inventory-storage-batch-sync (to differentiate it from an async API that may be introduced at a later stage).
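For illustration, a hedged client-side call against such a synchronous batch endpoint. The /item-storage/batch/synchronous path appears in MODINVSTOR-458 under Issue Links; the request body shape, host, tenant, and token values are assumptions:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

public class BatchSyncClientExample {

  public static void main(String[] args) throws Exception {
    // Hypothetical two-record batch; the exact body shape of the
    // synchronous API is an assumption here.
    String batch = """
        {"items": [
          {"id": "69640328-788e-43fc-9c3c-af39e243f3b7", "status": {"name": "Available"}},
          {"id": "7212ba6a-8dcf-45a1-be9a-ffaa847c4423", "status": {"name": "Available"}}
        ]}""";

    HttpRequest request = HttpRequest.newBuilder()
        .uri(URI.create("http://localhost:9130/item-storage/batch/synchronous"))
        .header("Content-Type", "application/json")
        .header("X-Okapi-Tenant", "diku")   // example tenant
        .header("X-Okapi-Token", "<token>") // placeholder token
        .POST(HttpRequest.BodyPublishers.ofString(batch))
        .build();

    HttpResponse<String> response = HttpClient.newHttpClient()
        .send(request, HttpResponse.BodyHandlers.ofString());

    // Expected: 201 Created on success, 422 Unprocessable Entity on rollback,
    // 413 Payload Too Large if the batch exceeds the server limit.
    System.out.println(response.statusCode() + " " + response.body());
  }
}
```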
Not implemented
Enforcing Content-Length/413 Payload Too Large and using a streaming method that does not load the complete batch into mod-inventory-storage memory require RMB support that doesn't exist yet, so they cannot be implemented within the scope of this issue; this has been split out into RMB-505.
The implementation uses PostgresClient.saveBatch, which does not support streaming.
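A minimal sketch of that non-streaming path, assuming an RMB PostgresClient.saveBatch variant that takes a table name, a JsonArray of entities, and a reply handler (exact signatures vary across RMB versions); the table name and handler wiring are illustrative:

```java
import io.vertx.core.json.JsonArray;
import io.vertx.ext.web.RoutingContext;
import org.folio.rest.persist.PostgresClient;

public class SaveBatchExample {

  static void storeBatch(PostgresClient postgresClient, RoutingContext ctx) {
    // The complete batch is buffered in memory first; no streaming is involved.
    JsonArray batch = ctx.body().asJsonArray();

    postgresClient.saveBatch("instance", batch, reply -> {
      if (reply.succeeded()) {
        ctx.response().setStatusCode(201).end();
      } else {
        ctx.response().setStatusCode(422).end(reply.cause().getMessage());
      }
    });
  }
}
```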
Issue Links
- blocks
  - UXPROD-1826 APIs for batch uploads (imports) (Closed)
- is blocked by
  - FOLIO-2050 SPIKE Design batch create / update API endpoint standard (Closed)
  - MODINVSTOR-385 Upgrade to RMB 28.0.0 (Closed)
  - RMB-433 saveBatch must not overwrite existing id (Closed)
- is duplicated by
  - MODINVSTOR-294 streaming POST instance (Closed)
  - MODINVSTOR-295 streaming POST holdings (Blocked)
  - MODINVSTOR-296 streaming POST items (Closed)
- relates to
  - MODINVSTOR-458 EffecticeCallNumberComponents is not set for items in "/item-storage/batch/synchronous" API (Closed)
  - RMB-505 allow to specify batch upload (PUT/POST) size server's max limit (Open)
  - DEBT-3 Slow or missing batch upload/download APIs (In Review)
  - MODINVSTOR-478 Implement batch upsert for instances, holdings and items (Closed)
  - RMB-246 migrate to reactive postgres client (vertx-pg-client) (Closed)