Proper API Usage
When preparing an add-on
It is advisable to think in advance about what data you actually need and at what intervals. Many data sets can be obtained efficiently using snapshots without the need to download everything repeatedly, which would unnecessarily burden both your and our hardware resources. At the same time, it is important to distinguish which data needs to be monitored regularly and which can be checked less frequently, for example, using webhooks.
Follow updates and changes in the documentation (API release news)
The API may change, and it is important to regularly follow updates and news in the documentation (API release news). We announce all updates and any BC (backward compatibility) changes well in advance, but it is crucial to keep track of these updates to avoid potential issues caused by lack of information.
Use webhooks
Webhooks regularly send information and, in case of failure, attempt to deliver the notification several more times. However, if some errors occur and notifications do not reach you, it is always a good idea to check whether there are issues with data retrieval. You should have a backup solution in the form of change downloads. We recommend regular integration checks, such as once a day, to verify that everything is working correctly.
Also, don’t forget to check the content of received data – for example, if a webhook reports an order change, but the actual status of the order hasn’t changed, it could be a duplicate notification that does not need processing. Implement control mechanisms to prevent repeated processing of the same data and ensure that this data is consistently verified.
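A minimal sketch of such a control mechanism, assuming each webhook notification carries an event identifier and the current order status (the field names here are illustrative, not an exact webhook contract):

```python
class WebhookDeduplicator:
    """Skips webhook notifications that were already processed,
    or that report no actual change in the entity's status."""

    def __init__(self):
        self.seen_event_ids = set()
        self.last_known_status = {}  # order_id -> last processed status

    def should_process(self, event_id, order_id, status):
        # Drop exact redeliveries of the same notification.
        if event_id in self.seen_event_ids:
            return False
        self.seen_event_ids.add(event_id)
        # Drop notifications that report an unchanged status.
        if self.last_known_status.get(order_id) == status:
            return False
        self.last_known_status[order_id] = status
        return True
```

In a real integration the seen-ID set and status map would live in persistent storage so duplicates are also caught across process restarts.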
Work with list endpoints (snapshots)
In addition to order details, product details, etc., endpoints also provide lists with basic information. Therefore, it is not always necessary to look up everything in order details, for example – you can obtain data directly from the order list (List of all orders), saving time and reducing API load.
Snapshot endpoints are optimized for batch queries, minimizing the need for individual detail requests. Their processing is asynchronous – upon calling, you receive a jobId in the response, and after successful processing, you need to call the system endpoint for job details.
To trigger asynchronous requests, you must have the job:finished webhook registered. If it is not registered, you will receive an error response with status code 403, and the task will not be added to the queue.
The job:finished webhook is also emitted when a task fails, so you must check the job details to obtain the task result. If an error occurs during an asynchronous request, the task is automatically marked as failed 3 hours after its creation; in such cases, the job:finished webhook is not emitted.
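The flow above can be sketched as a handler for the job:finished webhook. The payload shape and the `fetch_job_detail` callable are assumptions for this sketch; because the webhook fires for failed jobs too, the job detail must always be inspected to learn the actual result:

```python
import json

def handle_job_finished(payload, fetch_job_detail):
    """Handle one job:finished webhook delivery.

    `payload` is the webhook body (a JSON string assumed to contain
    the jobId) and `fetch_job_detail` is a callable that queries the
    system endpoint for job details -- both names are illustrative.
    """
    job_id = json.loads(payload)["jobId"]
    detail = fetch_job_detail(job_id)
    if detail["status"] == "completed":
        return detail["resultUrl"]
    # Failed jobs also emit job:finished; surface the error explicitly.
    raise RuntimeError(f"snapshot job {job_id} failed: {detail.get('error')}")
```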
Currently, the following snapshots are available for these entities:
- List of all products
- List of all price list details
- List of all orders
- List of all invoices
- List of all proforma invoices
- List of all credit notes
- List of all delivery notes
- List of all proof payments
- List of all customers
- List of all discount coupons
- List of all articles
Use the include parameter and filters
Some endpoints contain so-called include parameters (e.g., Product insertion) in the URL, which extend the response of individual calls and enrich them with additional necessary information. This is especially useful when creating data, as it eliminates the need to call the details of, for example, a newly created product—the data is available immediately within a single call.
Some endpoints also support filtering for easier item search, allowing you to avoid requesting an entire dataset when you only need a specific portion. Don’t be afraid to use filters!
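A small helper for composing such calls, as a sketch only (the parameter names follow common REST conventions and are illustrative, not an exact Shoptet contract):

```python
from urllib.parse import urlencode

def build_url(base, endpoint, include=None, filters=None):
    """Compose an API URL with optional include and filter query
    parameters, so only the needed portion of a dataset is requested."""
    params = dict(filters or {})
    if include:
        # Multiple include values are commonly joined with commas.
        params["include"] = ",".join(include)
    query = urlencode(params)
    return f"{base}{endpoint}?{query}" if query else f"{base}{endpoint}"
```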
Update data in batches
Several of our POST and PATCH endpoints allow processing multiple entities within a single request – therefore, try to send the maximum number of changes at once. Sending a single request with 20 changes is faster than 20 separate requests with one change each. This approach not only speeds up your update process but also reduces API load and minimizes bucket drops in the rate limiter.
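The batching idea can be sketched as follows; `send_request` stands in for one PATCH call carrying a list of entities, and its signature is an assumption of this sketch:

```python
def chunked(items, size):
    """Split a list of updates into batches of at most `size` entities."""
    return [items[i:i + size] for i in range(0, len(items), size)]

def send_in_batches(updates, send_request, batch_size=20):
    """Send all pending updates using as few requests as possible:
    45 updates become 3 requests instead of 45."""
    for batch in chunked(updates, batch_size):
        send_request(batch)
```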
Currently available batch updates:
Use change endpoints
For many endpoints, there are also change endpoints, meaning you don’t need to download orders every five minutes. Retrieve them at longer intervals and, in the meantime, use endpoints that handle changes (changes endpoints). This can also be combined with webhooks—rather than making repeated requests, simply track changes and download only new data.
However, it is crucial to approach received data with skepticism—not all changes may be relevant. Verify that you are not dealing with duplicate data you have already retrieved in the past, and ensure that certain fields do not contain unexpected values.
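One way to sketch this polling pattern is to keep a cursor of the last change seen and skip entries already processed. The entry field names and the injected `fetch_changes` callable are illustrative assumptions:

```python
class ChangePoller:
    """Polls a changes endpoint from the last seen change time onward,
    skipping duplicates, so only genuinely new data is downloaded."""

    def __init__(self, fetch_changes, start_from="1970-01-01T00:00:00"):
        self.fetch_changes = fetch_changes  # stand-in for the HTTP call
        self.last_seen = start_from
        self.processed = set()

    def poll(self):
        fresh = []
        for change in self.fetch_changes(self.last_seen):
            key = (change["entityId"], change["changeTime"])
            if key in self.processed:  # already handled: skip it
                continue
            self.processed.add(key)
            # ISO timestamps compare correctly as strings.
            self.last_seen = max(self.last_seen, change["changeTime"])
            fresh.append(change)
        return fresh
```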
Use the access token correctly
When installing an add-on, simply confirm that you have stored the token, and request the remaining data only once the add-on is correctly installed and everything is functioning properly. If you perform many steps at once and an installation error occurs, you may reach the token usage limit; in that case, you must wait 30 minutes before you can attempt the installation again.
If you are unsure about the installation, let us know—we can review the entire installation process and assist you in running it correctly.
Process requests in a queue
When retrieving various types of data via requests, consider the method of querying and subsequent processing. It benefits both parties if data is processed in a queue rather than running everything at once.
For example, instead of making parallel calls to multiple endpoints, use batch processing and distribute requests efficiently.
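A minimal sketch of sequential queue processing, with `perform` standing in for executing one API request:

```python
from collections import deque

def process_queue(requests, perform):
    """Process queued API requests one at a time instead of firing
    them all in parallel; returns the results in submission order."""
    queue = deque(requests)
    results = []
    while queue:
        results.append(perform(queue.popleft()))
    return results
```

In production this would typically be a persistent job queue with a small, fixed number of workers, but the principle is the same: bound the concurrency instead of bursting.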
Monitor status codes
Errors can sometimes occur in the API, which are reported via status codes.
Codes 5xx indicate an issue or unexpected behavior on our side. Codes 4xx are caused by incorrect request calls – this includes exceeding API limits, which returns the error 429 Too Many Requests.
It is always a good idea to check for these status codes and, if they appear, pay close attention to them.
If you encounter recurring errors, analyze them and adjust API calls based on the recommendations in the documentation.
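One common adjustment is to retry on 429 and 5xx responses while honoring the Retry-After header. A sketch under the assumption that `send` performs one HTTP call and returns a `(status, headers, body)` tuple:

```python
import time

def call_with_retry(send, max_attempts=5, sleep=time.sleep):
    """Retry an API call on 429 and 5xx responses. Waits for the
    Retry-After value when the server provides one, otherwise falls
    back to exponential backoff (1s, 2s, 4s, ...)."""
    for attempt in range(max_attempts):
        status, headers, body = send()
        if status != 429 and status < 500:
            return status, body  # success or a 4xx worth investigating
        wait = float(headers.get("Retry-After", 2 ** attempt))
        if attempt < max_attempts - 1:
            sleep(wait)
    return status, body  # give up after max_attempts
```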
Use caching
Proper API usage relies on working with downloaded data. Even if you limit full data downloads and rely on changes and webhooks, some data does not need to be retrieved multiple times a day. These requests can have longer intervals.
For example, category lists and basic product parameters change infrequently, so it is advisable to cache them for an extended period.
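A simple time-to-live cache illustrates the idea; the injected `clock` only exists to make the sketch testable:

```python
import time

class TTLCache:
    """Caches slow-changing data (category lists, basic product
    parameters) for a configurable period instead of re-downloading
    it several times a day."""

    def __init__(self, ttl_seconds, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock
        self.store = {}  # key -> (expires_at, value)

    def get_or_fetch(self, key, fetch):
        entry = self.store.get(key)
        if entry and entry[0] > self.clock():
            return entry[1]  # still fresh: no API call made
        value = fetch()      # expired or missing: refresh from the API
        self.store[key] = (self.clock() + self.ttl, value)
        return value
```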
Rate Limiter
To protect the server from overload (e.g., DDoS attacks), rate limiting is implemented. This means restricting the maximum number of simultaneous active connections:
- A maximum of 50 from a single IP address.
- A maximum of 3 connections per token.
If the limit is exceeded, an HTTP 429 error is returned.
To monitor the efficiency of your API calls, the leaky bucket algorithm is used. We notify you of reaching limits using the headers X-RateLimit-Bucket-Filling (in each response) and Retry-After (only when the bucket is full).
We recommend monitoring these headers and optimizing query intervals based on their fill levels.
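A sketch of such monitoring; the `"used/capacity"` value format is an assumption made for illustration, so check the actual header content in your responses before relying on it:

```python
def should_slow_down(headers, threshold=0.8):
    """Inspect the X-RateLimit-Bucket-Filling response header and
    report whether request intervals should be stretched before the
    bucket fills completely (value format assumed: "used/capacity")."""
    filling = headers.get("X-RateLimit-Bucket-Filling")
    if filling is None:
        return False
    used, capacity = (int(part) for part in filling.split("/"))
    return used / capacity >= threshold
```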
Log, log, log!
As Shoptet API evolves and new features and modifications are released, there may be situations where an endpoint behaves slightly differently than expected. In such cases, both you and we will save time if you send us detailed logs.
In general, logging more information is better than logging less. We recommend logging both the request and the response. The more data you provide us, the faster we can detect potential inconsistencies and assist you.
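A minimal sketch of logging one full exchange with Python's standard logging module; the function and field names are illustrative:

```python
import json
import logging

logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("api-client")

def log_exchange(method, url, request_body, status, response_body):
    """Log the full request and response of one API call, so the
    exchange can later be attached to a support report verbatim."""
    request_line = f"request: {method} {url} {json.dumps(request_body)}"
    response_line = f"response: {status} {json.dumps(response_body)}"
    logger.info(request_line)
    logger.info(response_line)
    return request_line, response_line
```

Remember to redact secrets such as the access token before the log lines leave your system.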
Don’t hesitate to request new features
If you need to retrieve specific information from the API and a relevant endpoint or value is missing, let us know. If certain endpoints are entirely absent, overusing other available endpoints or using them for unintended purposes is not a solution—it may lead to unnecessary API limit issues.
Don’t hesitate to reach out!