The Table API supports fetching large quantities of data. Tables can flexibly filter data of interest and let you pick the exact list of columns for a result. Tables have a much higher result limit of 50,000 entries per call. However, binary data is not unpacked, so we recommend using our SDKs.
Tables store data in tabular form as a set of columns. Each column has a specified type and each row has a unique uint64 `row_id`. Empty values are represented as JSON `null` or empty strings. Tables can grow extremely large, so it's good practice to use filters and the `columns` query argument to limit the result size. Table responses are automatically sorted by `row_id`. Use client-side sorting if a different sort order is required.
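As a sketch of how the `columns` and `limit` arguments fit together, the helper below builds a table query URL. The base URL and the `table_url` helper are assumptions for illustration; only the `columns` and `limit` parameter names come from this document.

```python
from urllib.parse import urlencode

# Hypothetical base URL; point this at your actual TzPro deployment.
BASE = "https://api.tzpro.io/tables/block"

def table_url(columns, limit=50000, **filters):
    """Build a table query selecting an exact column list and a row limit.

    Extra keyword arguments pass through as additional query parameters
    (e.g. filter expressions).
    """
    params = {"columns": ",".join(columns), "limit": limit}
    params.update(filters)
    return BASE + "?" + urlencode(params)

url = table_url(["row_id", "height", "time"], limit=1000)
```

Selecting only the columns you need keeps responses small, which matters once tables grow to millions of rows.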
Please don't treat row ids as globally unique, stable, or comparable. Row ids are essentially table sequence numbers that are only unique within a particular version of a table. Let's use two scenarios to explain why that matters:
We use multiple independent indexer instances to scale the TzPro API for many concurrent users. Each indexer locally stores its own private database tables. On chain reorgs our indexers roll back history by removing side-chain operations and reverse-updating account balances. That means two different indexer instances may end up seeing two different chain reorg histories. When that happens, these instances will have different row ids for operations, and likely also for accounts, in their individual databases.
The same can happen when we update the indexer from time to time and rebuild databases as a result. In this case a previously observed chain reorg will be lost and the database will only contain data about the canonical chain. This means that even the same API instance can return different row ids for the same historic operations when on-chain history was rebuilt without knowledge of a reorg.
As a workaround, you can use the combination of block `height` and operation position `op_n` if your application requires a unique id other than a hash.
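The workaround above can be sketched as a small helper that combines `height` and `op_n` into a composite key. The `op_key` function name is an assumption for illustration; the stability argument for the `(height, op_n)` pair comes from the text.

```python
def op_key(height: int, op_n: int) -> tuple:
    """Composite key for an operation: (block height, operation position).

    Unlike row_id, which is a per-database sequence number that may differ
    between indexer instances or after a rebuild, this pair is derived from
    on-chain data and stays stable.
    """
    return (height, op_n)

# Tuples compare lexicographically, so keys sort in chain order.
a = op_key(1000, 5)
b = op_key(1001, 0)
```

Tuples also work directly as dictionary keys, so you can deduplicate or index operations client-side without relying on row ids.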
Tables support the following general query parameters.
To paginate result sets larger than the maximum limit, include `row_id` in the list of columns and use the last value of `row_id` as cursor in your next call. This automatically applies an extra filter `row_id.gt=cursor` for ascending and `row_id.lt=cursor` for descending order. You can of course also apply the relevant `row_id` filter directly, without using cursor.
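The ascending-cursor loop described above can be sketched as follows. Here `fetch_page` is a stand-in for an actual table API call; the demo runs it against an in-memory table so the control flow is visible without network access.

```python
def paginate(fetch_page, limit=50000):
    """Yield all rows by advancing a row_id cursor in ascending order.

    fetch_page(cursor, limit) stands in for a table request that returns
    up to `limit` rows with row_id greater than `cursor` (i.e. the
    row_id.gt=cursor filter).
    """
    cursor = 0
    while True:
        rows = fetch_page(cursor, limit)
        if not rows:
            break  # an empty page means we are past the last row
        yield from rows
        cursor = rows[-1]["row_id"]  # last row_id becomes the next cursor

# Demo against a fake in-memory table of 7 rows.
data = [{"row_id": i} for i in range(1, 8)]

def fetch_page(cursor, limit):
    hits = [r for r in data if r["row_id"] > cursor]
    return hits[:limit]

all_rows = list(paginate(fetch_page, limit=3))
```

Because responses are already sorted by `row_id`, taking the last row of each page as the next cursor never skips or repeats rows.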
Filter Example

The example below filters blocks by time using `time.lte=2019-08-31` (inclusive) and returns column `height`. The same effect can be achieved with the range operator.
To filter tables, use filter expressions of the form `<column>.<operator>=<arg>`. Filters work on any combination of columns regardless of type. For arguments, the type encoding rules of the column type apply. Filtering by multiple columns combines the expressions with a logical AND. For simplicity and performance there are currently no OR expressions or more complex operators available. Comparison order for strings and binary is the lexicographical order over UTF-8 (string) or ASCII (binary) alphabets.
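A small helper can turn keyword arguments into `<column>.<operator>=<arg>` pairs; a double underscore standing in for the dot is a common Python convention, since dots are not valid in identifiers. The `filter_params` name is an assumption for illustration; the expression syntax itself comes from this document.

```python
from urllib.parse import urlencode

def filter_params(**filters):
    """Encode keyword filters like time__lte=... as <column>.<operator>=<arg>
    query parameters, replacing the double underscore with a dot.

    Multiple filters combine with a logical AND on the server side.
    """
    return urlencode({k.replace("__", "."): v for k, v in filters.items()})

qs = filter_params(time__lte="2019-08-31", height__gt=100)
```

Appending the resulting string to a table URL applies both filters at once, e.g. all blocks above height 100 up to and including 2019-08-31.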