Tables - advanced

Automatically generated sequences

There are three keywords that enable you to define a field whose value is generated automatically (such as you would need for identifying a new trade or a new order):

  • autoIncrement, which simply generates an integer value in sequence; by default, the first number generated will be 1
  • sequence, which generates either a value in a Genesis format (see below) or a UUID, depending on the database technology you are using
  • uuid, which generates a UUID

autoIncrement is available for LONG and INT fields; sequence and uuid are available for STRING fields.
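
For example, a STRING identifier generated as a UUID could be declared as shown below. This is a minimal sketch with an illustrative table name and id; it assumes uuid is chained onto the field definition in the same way as autoIncrement and sequence.

table(name = "ORDER", id = 2001) {               // illustrative table name and id
    field("ORDER_ID").primaryKey().uuid()        // generates a UUID for each new record
    field("QUANTITY", INT)
    field("SYMBOL")
}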

info

For all new development, we recommend that you use autoIncrement fields rather than sequence or uuid.

There is no known efficiency difference between one keyword and another. However, autoIncrement is easier to define and generates a simple integer with no padding or leading zeroes; it keeps things simple.

Example of using autoIncrement

table(name = "TRADE", id = 2000) {
field("TRADE_ID", LONG).primaryKey().autoIncrement()
field("QUANTITY")
field("PRICE", DOUBLE)
field("SYMBOL")
field("DIRECTION", ENUM("BUY", "SELL"))
}

Default format when using sequence

When you use the keyword sequence to generate sequence numbers, the value generated is actually a combination of sequence number and other parameters defined in your application's System definition. For example, a table with the field TRADE_ID defined as:

table(name = "TRADE", id = 2000) {
field("TRADE_ID").primaryKey().sequence("TR")
field("QUANTITY")
field("PRICE", DOUBLE)
field("SYMBOL")
field("DIRECTION", ENUM("BUY", "SELL"))

}

will generate TRADE_ID fields in the following format: SEQUENTIAL_VALUE (zero-padded to the configured padding size) + SEQUENCE + LOCATION + 1 (fixed)

Using the default settings (see details below), the first value generated for the field defined above will be: 000000000000001TRLO1
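
Breaking that example value down against the format above:

000000000000001   sequential value, zero-padded to the configured padding size (15 by default)
TR                sequence name passed to sequence("TR")
LO                Location (default 'LO')
1                 fixed suffix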

Using sequence with an SQL database (UUID)

If you are using an SQL database, sequence behaves differently by default; it generates a UUID.

To switch off UUID generation and use the default Genesis format for sequence:

  1. Add or modify the following keys in the file site-specific/genesis-system-definition.kts (see the example configuration after this list):
Item                          Default value   Description
Location                      'LO'            The location string that forms part of the generated value.
SqlSequencePaddingSize        15              The number of zeros used to pad the sequential number.
SqlEnableSequenceGeneration   false           Must be set to true.
  2. Run genesisInstall to apply the settings.

  3. If you have any new or changed sequences, run remap to generate any new database sequences.

  4. If there are existing fields using sequences, run the server command CreateMissingSqlSequences.

  5. If you need to adjust the initial value of the sequences (for example, if you are migrating data), use the server command SetAutoIncrement.
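
For step 1, the relevant entries in site-specific/genesis-system-definition.kts could look like the sketch below. The values shown are the defaults from the table above, with SqlEnableSequenceGeneration switched on; adjust them to suit your application.

systemDefinition {
    global {
        // enable the default Genesis sequence format instead of UUIDs
        item(name = "SqlEnableSequenceGeneration", value = "true")
        // padding and location values shown here are the defaults
        item(name = "SqlSequencePaddingSize", value = "15")
        item(name = "Location", value = "LO")
    }
}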

Field definition metadata

Metadata within the field definition provides additional characteristics for your schemas. These characteristics are available to both front-end and back-end applications, giving a shared experience. They are particularly useful with table entities and their related events.

General metadata

  • title: This property can be used with all fields. It names the field from a front-end perspective. While every field has a default title (derived from the field name converted to title case, e.g. MY_STRING_FIELD becoming My String Field), you can specify a custom title. This custom title only changes how the field name appears on the front end; it does not affect the back end.

For example, the code below assigns the title A very exciting field to the field MY_FIELD:

field("MY_FIELD").metadata {
title = "A very exciting field"
}

Validation metadata

There are type-specific validation options that work on both front-end and back-end applications. For instance, you can set the maximum value for a numeric field.

However, these validation checks are only triggered during events. Any modifications to these properties do not affect pre-existing records.

Numeric field metadata

Numeric fields (except for BIGDECIMAL) can use the following metadata properties:

  • min specifies the minimum value for the numeric field.
  • max defines the maximum value for the numeric field.

field("MY_INT", INT).metadata {
    max = 100
    min = 0
}

String field metadata

These properties are specific to string fields:

  • pattern sets the regular expression that values of the string field must match.
  • minLength declares the minimum number of characters allowed in the string field values.
  • maxLength sets the maximum number of characters allowed in the string field values.

field("MY_FIELD").metadata {
    pattern = "\\w+"
    minLength = 2
    maxLength = 20
}

Event integration and field modifiers

Sometimes, you need insert and modify events for a table that exclude certain fields. For example, you might have read-only fields that should be available during insertion but not modification; or you might have fields that are not user input, such as fields that capture the username or the current date/time.

To address these requirements, Genesis provides field definition modifiers. Insert and modify event classes are created for the table when any of the modifiers below is used:

Modifier                 Field Type   Effect
.username()              STRING       The field value gets set to the username of the user submitting the event.
.timestamp()             DATE         The field value is set to the current date.
.timestamp()             DATETIME     The field value gets set to the current date and time.
.readonly()              any          The field is available during insertion but not modification.
.applicationProvided()   any          The field must be provided programmatically.
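
For example, a field that must be populated by your own code rather than by user input could be declared as follows (the field name here is purely illustrative):

field("SETTLEMENT_ACCOUNT").applicationProvided()   // illustrative field; value must be supplied programmatically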

You can also combine these; for example, the code below only sets the username during the insert event:

field("CREATED_BY").username().readonly()

There are benefits to using these events over custom ones:

  • Events synchronise with your table definition: fields are added/removed automatically.
  • Metadata on fields is kept in sync.
  • Unless application-provided fields are modified, table changes require no changes in the event definition.

Here's an example:

table(name = "TRADE", id = 11002) {
field("ID", INT).primaryKey().autoIncrement()
field("INSTRUMENT_ID").readonly()
field("QUANTITY", INT).metadata {
min = 0
}
field("PRICE", DOUBLE).notNull().metadata {
min = 0.0
}
field("NOTIONAL", DOUBLE).metadata {
min = 0.0
}
field("LAST_UPDATED_BY").username()
field("TRADE_DATE", DATE).timestamp().readonly()
}

Insert event

For the definition above, a TradeInsert event is created. Its input includes the following fields:

  • INSTRUMENT_ID
  • QUANTITY
  • PRICE
  • NOTIONAL

Fields with minimum values defined (QUANTITY, PRICE and NOTIONAL) are validated automatically. The LAST_UPDATED_BY field is populated automatically with the current user, and TRADE_DATE captures today's date.

The insert event will look like:

eventHandler<TradeInsert> {
    onCommit {
        val entity = it.toEntity()
        entityDb.insert(entity)
        ack()
    }
}

Any application-provided fields would have to be supplied as parameters to the toEntity() function in the handler.

Modify event

Similarly, a TradeModify event is created. This takes the following input fields:

  • ID
  • QUANTITY
  • PRICE
  • NOTIONAL

The INSTRUMENT_ID field is read-only and hence not available for modification. The TRADE_DATE field is also read-only, so it will not be updated; but the LAST_UPDATED_BY field is set to the current username.

The modify event will look like this:

eventHandler<TradeModify> {
    onCommit {
        val modifyDetails = it.toModifyDetails()
        entityDb.modify(modifyDetails)
        ack()
    }
}

As with the insert event, any application-provided fields would need to be supplied as parameters to the toModifyDetails() function in the handler, unless those fields are read-only.

Subtables

Within the body of the table definition, you can use a subTables block to define one or more subtables. A subtable provides a unique point of view on the data schema, which goes further than a simple join relationship. It gives extra functionality to a main table.

For example, you might have an EXECUTION_VENUE table to provide details of different exchanges and trading venues. This table on its own probably cannot represent all the possible symbologies for an exchange. So, you could add a subtable called ALT_VENUE_CODE, in which the relationship is one-to-many from EXECUTION_VENUE_ID to ALT_VENUE_CODE.

ALT_VENUE_CODE inherits key fields from the EXECUTION_VENUE table, and it simply acts as a lookup table for EXECUTION_VENUE records.

The example below shows this.

After the fields and the primary key have been defined, you can see the subtable ALT_VENUE_CODE.

  • The EXECUTION_VENUE_ID field is used to generate the join operation. This field is inherited automatically.
  • Then the additional fields ALT_VENUE_CODE and ALT_VENUE_CODE_TYPE are defined.
  • Then the key for the subtable is defined.

table(name = "EXECUTION_VENUE", id = 5043) {
    field(COUNTRY_CODE)
    field(OPERATING_MIC)
    field(DESCRIPTION)
    field(EXECUTION_VENUE_ID)

    primaryKey("EXECUTION_VENUE_ID")

    subTables {
        fields("EXECUTION_VENUE_ID")
            .joiningNewTable(name = "ALT_VENUE_CODE", id = 5044) {
                field("ALT_VENUE_CODE")
                field("ALT_VENUE_CODE_TYPE")

                primaryKey("EXECUTION_VENUE_ID", "ALT_VENUE_CODE_TYPE")
            }
    }
}

Some tables provided by the platform have subtables. The example below shows the GENESIS_PROCESS monitoring table, which includes a subtable called GENESIS_PROCESS_MONITOR.

table(name = "GENESIS_PROCESS", id = 12) {
field(PROCESS_NAME)
field(PROCESS_STATUS, ENUM("GOOD", "BAD"))
field(PROCESS_STATUS_MESSAGE)
field(PROCESS_STATE_TEXT)
field(PROCESS_RESOURCES)
field(PROCESS_HOSTNAME)
field(PROCESS_CPU_USAGE, DOUBLE)
field(PROCESS_MEM_USAGE, DOUBLE)
field(PROCESS_SECURE, BOOLEAN)
field(PROCESS_PORT, INT)
field(LOG_LEVEL)
field(DATADUMP, BOOLEAN)
field(START_TIME, DATETIME)
field(RESOURCE_TYPES)

primaryKey("PROCESS_NAME", "PROCESS_HOSTNAME")

subTables {
fields("PROCESS_HOSTNAME", "PROCESS_NAME")
.joiningNewTable(name = "GENESIS_PROCESS_MONITOR", id = 20) {
field("MONITOR_NAME")
field("MONITOR_MESSAGE", STRING(4_000))
field("MONITOR_STATE", ENUM("GOOD", "BAD"))

primaryKey("PROCESS_HOSTNAME", "PROCESS_NAME", "MONITOR_NAME")
}
}
}

Audit tables

If you want to be able to track the changes made to a table, you need an audit table. This is useful, for example, if you want to:

  • keep track of the changes to trades, with times and modifications, to provide an accurate record of events
  • be able to roll back to a specific time following an outage

Audit tables don't exist in isolation. They exist to audit activity on a source table. To create an audit table, you simply need to include the parameter audit = details() when you define the source table itself. The syntax is:

table (name = <TABLE_NAME>, id = <TABLE_ID>, audit = details(id = <TABLE_NAME_AUDIT_ID>, sequence = <TABLE_SEQUENCE>, tskey = <TRUE_OR_FALSE>))
...

The following details can be added when you use the audit = details() parameter:

Parameter name   Type      Description
id               Integer   Unique id to identify the audit table
sequence         STRING    Unique name to identify its sequence
tskey            boolean   Set a timestamp index

The audit table that is created has the same name as the source table, plus the suffix _AUDIT. Here is an example table definition, which creates a table called EUR_TRADE and an audit table called EUR_TRADE_AUDIT:

table (name = "EUR_TRADE", id = 2000, audit = details(id = 2100, sequence = "EA")) {
field("TRADE_ID").sequence("TR").primaryKey()
field("COUNTERPARTY_ID")
field("INSTRUMENT_ID").notNull()
field("QUANTITY", INT)
field("PRICE", DOUBLE).notNull()
field("TRADE_DATE", DATE)
field("TRADE_STATUS", ENUM("ACCEPTED", "CANCELLED"))
}

Whenever an eventHandler interacts with the table EUR_TRADE, a record is automatically inserted into the audit table EUR_TRADE_AUDIT to log the event.

So, to be clear:

  • An audit table monitors and logs details of changes on a specific table.
  • If the table and the eventHandlers that interact with it are set up correctly, then the auditing is performed automatically.
  • Every change to the table is automatically logged and timestamped on the audit table.

Structure of audit tables

When you create an audit table, it has all the same fields as the source table, plus the following fields:

Field name             Data Type   Description
AUDIT_EVENT_DATETIME   DATETIME    Autogenerated date and time of the event
AUDIT_EVENT_TEXT       STRING      Optional “REASON” value sent as part of the event message
AUDIT_EVENT_TYPE       STRING      Event that wrote on the source table
AUDIT_EVENT_USER       STRING      User on the event message
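
As an illustration, for the EUR_TRADE example above, the generated EUR_TRADE_AUDIT table conceptually contains the following (you do not define this table yourself):

EUR_TRADE_AUDIT
    TRADE_ID, COUNTERPARTY_ID, INSTRUMENT_ID, QUANTITY, PRICE, TRADE_DATE, TRADE_STATUS   // all fields of EUR_TRADE
    AUDIT_EVENT_DATETIME, AUDIT_EVENT_TEXT, AUDIT_EVENT_TYPE, AUDIT_EVENT_USER            // audit fields added automatically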

ACID

Every eventHandler that interacts with the audited table must be transactional. This ensures ACID compliance.

Here is a simple example of a transactional eventHandler:


eventHandler<Company>(name = "COMPANY_INSERT", transactional = true) {
    onValidate {
        ack()
    }
    onCommit { event ->
        val company = event.details
        entityDb.insert(company)
        ack(listOf(mapOf("VALUE" to "SUCCESS!")))
    }
}