Bulk Data Downloading
Should you wish to download all available records, and possibly keep your copy "up to date" with minimal overall load, several options exist.
This page references "customers" in all examples, as that is often the first set of data that grows too large to handle easily. The methods explained work equally well for other areas: products, sales, accounts, logs, etc.
FD1 imposes a maximum size on a single response packet. This maximum is large enough for typical use, but may not be for bulk downloads. The limit can vary between servers and cannot simply be increased.
Simple Poll and Download
The obvious first choice is to simply request the data periodically as a single call.
{ a: "fd1.customers.get_customers", rq: 12345 }
{ r: "fd1.customers.get_customers", rp: 12345, data: { rows: [ { ... }, { ... }, { ... }, ... ] } }
This approach is completely fine for small tables (say, fewer than 1000 rows) or where you are requesting data infrequently.
You can use "qo" in the request to list exactly what fields you require. This can be faster for the server and reduce network traffic considerably.
{ a: "fd1.customers.get_customers", rq: 12345, qo: { cid: true, name: true, phone: true, email: true } }
Simple Poll and Download using Chunked Responses
This method extends the simple poll: the server may respond with multiple response packets.
{ a: "fd1.customers.get_customers", rq: 12345, mo: "chunk" }
{ r: "fd1.customers.get_customers", rp: 12345, ch: 1, data: { rows: [ { ... }, { ... }, ... ] } }
{ r: "fd1.customers.get_customers", rp: 12345, ch: 2, data: { rows: [ { ... }, { ... }, ... ] } }
// repeats until done (ch: 0)
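Assembling a chunked response might look like the sketch below. The `fd1_send` / `fd1_receive` callables are hypothetical stand-ins for your transport (send a request packet; block until the next response packet arrives):

```python
def download_all_chunked(fd1_send, fd1_receive) -> list:
    """Accumulate rows from a chunked response until ch: 0 arrives.

    Assumption: the ch: 0 packet may or may not carry rows, so we
    read rows defensively from every packet, including the last.
    """
    fd1_send({"a": "fd1.customers.get_customers", "rq": 12345, "mo": "chunk"})
    rows = []
    while True:
        packet = fd1_receive()
        rows.extend(packet.get("data", {}).get("rows", []))
        if packet.get("ch") == 0:  # ch: 0 marks the final chunk
            break
    return rows
```

Note that if the transfer fails partway, the loop must be restarted from the beginning; the ch values cannot be used to resume.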
This technique can be used for tables of any size, but can generate a large amount of network traffic.
If the network fails mid-transfer, you will need to restart the whole operation. The chunk numbers (ch) in the responses cannot be used to resume or request retransmission.
Polling for Changes and Download using Chunked Responses
Most data in Fieldpine has a semi-hidden field "rve" which is managed by Fieldpine (you cannot set it directly) and is increased whenever the record changes. Think of it as a "last edit date". We use "rve" so that the field name is the same over all tables, and there is no confusion around timezones. From your point of view, you can simply treat it as an increasing number.
Sidenote. Fieldpine uses two rve formats, one stored in a double (YYYYMMDD.HHMMSSccc) the other in a 64 bit integer (YYYYMMDDHHMMSScccNN). Within FD1 we are trying to only show the 64 bit integer version, but you may periodically see the double format.
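For illustration only, the 64-bit integer format can be split into its documented components. In real code you should treat rve as an opaque increasing number and only compare values:

```python
def decode_rve(rve: int) -> dict:
    """Split a 64-bit rve (YYYYMMDDHHMMSScccNN) into its parts.
    Purely illustrative; compare rve values rather than decoding them."""
    s = f"{rve:019d}"
    return {
        "date": s[0:8],    # YYYYMMDD
        "time": s[8:14],   # HHMMSS
        "ms":   s[14:17],  # ccc
        "seq":  s[17:19],  # NN
    }
```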
Once you know the highest RVE currently in your system (which you can determine with something like select max(rve) from table), include that RVE value in your poll.
{
  a: "fd1.customers.get_customers",
  rq: 12345,
  mo: "chunk",
  q: {
    "_rve(gt)": 2024010113245600023  // Highest value you have
  }
}
The response is identical to the standard poll, but only includes rows whose RVE is greater than your supplied value.
We still recommend using the chunked response format, as something may happen server-side that changes the RVE values on millions of rows in a short space of time.
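Putting the pieces together, one incremental sync pass might be sketched as below. As before, `fd1_send` / `fd1_receive` are hypothetical transport callables, and the sketch assumes returned rows include their rve value (if they do not, re-derive the high-water mark from your local store with select max(rve), as described above):

```python
def sync_changes(fd1_send, fd1_receive, last_rve: int):
    """Fetch only rows changed since last_rve; return (rows, new highest rve)."""
    fd1_send({
        "a": "fd1.customers.get_customers",
        "rq": 12345,
        "mo": "chunk",
        "q": {"_rve(gt)": last_rve},  # only rows edited after our high-water mark
    })
    rows = []
    while True:
        packet = fd1_receive()
        rows.extend(packet.get("data", {}).get("rows", []))
        if packet.get("ch") == 0:
            break
    # Advance the high-water mark; if nothing changed, keep the old value.
    highest = max((row["rve"] for row in rows), default=last_rve)
    return rows, highest
```

Persist the returned high-water mark between runs so each poll only transfers new and changed rows.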