Low Level Documentation



PosGreen Engine

This engine provides the bulk of the POS processing at checkouts. It can be used directly, or via central servers as a remote service.


Spill Database

The spill database is an internal SQL buffer that holds SQL statements which should already have been applied to the extended database but could not be saved. There is a single spill database shared across all environments; it is not considered part of the per-retailer data space.

When a task writes to the extended database, that write can occasionally fail. The spill database provides a separate database into which the update/insert can be saved; it is then applied to the extended database at the next opportunity.

The spill database is called Qe3Spill.db and exists in the same path as Qe3mgmt.db. It holds one table, "spilldata", which contains:

  1. RmSystem - which retailer this record belongs to
  2. RVE - date/time this record was added
  3. Urgent - whether this spill record is urgent and should be applied ASAP, or can be lazily written
  4. Payload - the SQL to run
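
Based on the fields above, the spilldata table is presumably along these lines. This is an illustrative sketch only; the actual column types, and any additional columns, are not documented here:

```sql
create table spilldata (
    RmSystem INTEGER,   -- which retailer this record belongs to
    RVE      TEXT,      -- date/time this record was added
    Urgent   INTEGER,   -- 1 = apply ASAP, 0 = may be lazily written
    Payload  TEXT       -- the SQL to run
);
```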

Why this is needed

Extended databases typically use SQLite, which has a major restriction: only one writer is permitted at a time. As the Fieldpine engine is multi-threaded, it is very easy to have multiple threads (QeTask) running that need write access. This alone is not an issue and works perfectly most of the time.

The "only one writer" restriction is more nuanced than it appears. If a connection starts in read mode and then attempts to upgrade to write mode, it may encounter "database is locked" errors. For example, the following code fails with "database is locked":

#include <future>

SqliteServices_DbConnection* pConn = SqliteServices::OpenOrCreateDb("test.sqlite");
SqliteServices::RunSimpleSql(pConn, "PRAGMA journal_mode = WAL");
SqliteServices::RunSimpleSql(pConn, "create table aaa (a int, b TEXT)");
SqliteServices::RunSimpleSql(pConn, "insert into aaa (a, b) values (1, 'bob')");

// Opening a cursor starts an implicit read transaction on pConn
SqliteServices::Cursor cur(pConn, "select b from aaa");
while (!cur.IsEOF()) {
	RString bstr = cur.GetFieldRString_Fast(1);

	// A second connection, on another thread, writes to the database
	// while our read transaction is still open
	auto thread = std::async(std::launch::async, []() {
		SqliteServices_DbConnection* pConn2 = SqliteServices::OpenOrCreateDb("test.sqlite");
		SqliteServices::RunSimpleSql(pConn2, "insert into aaa (a, b) values (2, 'mary')");
	});
	thread.wait();

	// This next line fails with "database is locked": the read transaction
	// cannot upgrade to write because the database changed underneath it
	SqliteServices::RunSimpleSql(pConn, "update aaa set b='sue' where a=1");
	break; // the example ends at the failure
}


The example above starts a read transaction. A separate thread then inserts a row into the database. When the original thread attempts to update a table, it cannot upgrade to write status because the database has changed since its read transaction began. So even though there is only "one writer", the first connection is still unable to make any write changes.

This is not a bug in SQLite; it is working exactly as advertised. Practically speaking, all write transactions should use "begin immediate transaction" when they start.
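
As a sketch, a task that knows it will write can take the write lock up front, so the transaction never needs a read-to-write upgrade later. This is illustrative only, reusing RunSimpleSql as in the earlier example:

```
// Acquire the write lock immediately, before any reads, so this
// transaction can never hit a read-to-write upgrade failure
SqliteServices::RunSimpleSql(pConn, "begin immediate transaction");
SqliteServices::RunSimpleSql(pConn, "update aaa set b='sue' where a=1");
SqliteServices::RunSimpleSql(pConn, "commit");
```

Note that "begin immediate" can itself fail with "database is locked" if another writer is active, but that failure happens up front, before any work is done, where it is easy to retry.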

While it would be possible to solve this problem through careful use of transactions, ensuring that every task locks the database for write before beginning, this was not enforced: tasks would then be required to take a write lock even when they are unlikely to write anything, and it is also quite difficult to retrofit this into some tasks.

Implications for Task Code

The following suggestions are not recommended as general advice for applications. They pertain only to Fieldpine Tasks, which have additional constraints that are not publicly documented.

High Write Tasks

If your task has a high probability of writing, allocate your initial database connection in "write mode". This ensures your transaction has a very high chance of success. For example
pTask->OpenExDbWrite(pEnv, pLe)
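
A high-write task might therefore be shaped like the sketch below. The return type and error handling around OpenExDbWrite are assumptions for illustration, not documented API:

```
// High-write task: take the write connection up front, so the engine
// grants writer status (and drains pending spill records) immediately
SqliteServices_DbConnection* pDb = pTask->OpenExDbWrite(pEnv, pLe);
if (pDb) {
	// All writes on this connection now proceed without upgrade failures
	SqliteServices::RunSimpleSql(pDb, "update aaa set b='sue' where a=1");
}
```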

Rare Writing Tasks

If your task is typically read only, you can attempt to upgrade to write mode and will likely have minimal issues. If you do encounter a locked database, you can insert your SQL into the spill database instead (using QueryExchange3::WriteToSpillDb)
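
One possible shape for that fallback is sketched below. The boolean return check on RunSimpleSql and the WriteToSpillDb parameters are illustrative assumptions; the actual error reporting and signature are not documented here:

```
// Try the direct write first; if it fails (eg "database is locked"),
// fall back to the spill database rather than blocking or retrying
if (!SqliteServices::RunSimpleSql(pConn, "update aaa set b='sue' where a=1")) {
	// The spill record is applied by the next writer, or lazily if not urgent
	QueryExchange3::WriteToSpillDb(pEnv, "update aaa set b='sue' where a=1");
}
```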

Invokes many other Tasks

If your task is long duration and meanders over numerous sub-tasks (eg processing an inbound email may create a sale record, trigger an account-over-credit-limit check, which in turn triggers an email to be queued) then you may like to use DeferTxns. DeferTxns allows you to hand off processing to a future thread rather than running the logic immediately.
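
A hypothetical use might look like the following. The DeferTxns call shape is invented purely for illustration, based only on the description above:

```
// Hypothetical sketch: the DeferTxns signature shown here is not documented.
// Instead of sending the follow-up email now (which would keep this long
// transaction open even longer), queue the work for a future thread.
pTask->DeferTxns(pEnv, "insert into emailqueue (Cid, Reason) values (NNN, 'over credit limit')");
```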

Spill Only?

Tasks can simply write to the spill database without even attempting to write to the primary database. This might be used when simply recording inbound data records (eg a stocktake count record). This is not recommended as a common method, but is acceptable for some tasks.

Long Running Tasks

Tasks that run for an extended period and update data as they work, such as analysis and statistics functions, should probably use the spill database to apply changes. This ensures they never hold write locks that block other tasks. For example
// Select all sales (a long-running read; only a read transaction is held)
SqliteServices::Cursor cur(pConn, "select * from sales");
while (!cur.IsEOF()) {
	// Process the record being read ... (cursor advance elided)

	// If an update to, say, "customer" is required, use the spill db rather
	// than writing directly, so this task never takes a write lock
	QueryExchange3::WriteToSpillDb(pEnv, "update customer set status='really good customer' where Cid=NNN");
}

Implementation detail

Writes to the spill database are also enclosed in "begin immediate transaction" and "commit" statements. A write inserts a single record and returns. A spill database write also sets the 'spill pending' bit in the shared environment header; this flag ensures the next writer will scan and apply spill updates before doing any other processing.

When a task asks for a "write connection", the engine checks whether spill records exist; if so, they are applied before the database connection is returned.
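
In outline, the engine-side drain step presumably resembles the following sketch. The helper names for the environment-header bit are invented for illustration; only spilldata and its columns come from the description above:

```
// Sketch of the write-connection path: drain pending spill records
// before handing writer status to the requesting task
SqliteServices::RunSimpleSql(pConn, "begin immediate transaction");
if (EnvHeader_IsSpillPending(pEnv)) {                 // hypothetical helper
	// Replay spilled statements for this retailer, oldest first
	SqliteServices::Cursor spill(pSpillConn,
		"select Payload from spilldata where RmSystem=? order by RVE");
	// ... run each Payload against the extended database, then delete the row ...
	EnvHeader_ClearSpillPending(pEnv);                // hypothetical helper
}
// The connection is now returned to the task with the spill drained
```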