System.Data - Data Entity Framework

System.Data library provides several features:

  • The database-oriented abstract data entity framework, which maps code data types to database tables to simplify database operations as much as possible, greatly reducing the amount of SQL in programs and avoiding problems such as difficult refactoring.

  • A base class library for cache databases; since each cache database has its own characteristics, the cache base classes and interfaces only declare the common methods.

  • Binding the cache to the database, so that database results are cached automatically and query speed is improved.


Database operation example

Let's start with an example of manipulating a database under PLoop's data entity framework (pseudo-code; the data definitions are described later).

require "PLoop.System.Data" -- the PLoop library will be loaded by the dependency.

PLoop(function(_ENV)
    import "System.Data"

    -- insertion of data
    function AddUser(name, telno)
        -- UserDataContext is the database context that corresponds to the user's processing.
        with(UserDataContext())(function(ctx)
            -- ctx.Transaction is the database transaction provided by the context.
            -- a transaction must be started before updating data
            with(ctx.Transaction)(function(trans)
                -- ctx.Users is the data collection corresponding to the user table in the database.
                -- Add adds a row of data
                local user = ctx.Users:Add{
                    name = name,
                    telno = telno,
                    register = Date.Now,
                }

                -- commit to database for data changes
                ctx:SaveChanges()

                -- id maps to an auto-increment field, so there is no need to set it;
                -- after the changes are saved, the id property holds the value the
                -- database generated for it, and we can use it directly.
                print("New User ID", user.id)
            end)
        end)
    end

    -- Reading and updating data
    function UpdateUser(id, name, telno)
        with(UserDataContext())(function(ctx)
            -- When updating data, the query usually goes through the primary key or an index,
            -- so the exact-match query table can be used directly; the result is a List object,
            -- and we just take the first object with First.
            local user = ctx.Users:Query{ id = id }:First()
            if not user then return end

            with(ctx.Transaction)(function(trans)
                -- Update data, need to lock data rows first
                user = ctx.Users:Lock{ id = id }:First()
                user.name = name
                user.telno = telno

                ctx:SaveChanges()
            end)
        end)
    end

    -- delete data
    function DeleteUser(id)
        with(UserDataContext())(function(ctx)
            with(ctx.Transaction)(function(trans)
                local user = ctx.Users:Lock{ id = id }:First()

                if not user then
                    -- roll back explicitly when the changes should not be committed,
                    -- otherwise the transaction is committed when it is closed
                    return trans:Rollback()
                end

                -- marks itself as deleted
                user:Delete()

                ctx:SaveChanges()
            end)
        end)
    end
end)

Database contexts and data entities

The framework maps databases and data processing at three levels.

  • DataBase <-> DataContext. A single database operation usually reads and writes several tables, so we need a manager that tracks the data objects created during the operation, records all of their modifications, and commits those modifications to the database in the order required by their data dependencies.

    In code, we map database tables to data collections: a table contains multiple rows and each row corresponds to a data object, so a table can be regarded as a collection of data objects. Searching for data means fetching the objects in the collection that match the conditions, and adding, deleting and modifying are operations on that collection.

    For this purpose, PLoop provides database context classes. Each database context contains several data collections; it does not need to cover every table in the database, usually only the few tables that are related to each other.

  • DataTable <-> EntityClass. A database table has a fixed structure that declares its field types, which corresponds well to a class with properties.

    Note that this refers to the structure of the database table, not to the table itself (and the rows it contains).

    Usually we define the data entity class inside the database context class; the system then generates a corresponding data collection property whose element type is the entity class, and we manipulate the data through that collection.

    In the above example, UserDataContext.User is the entity class corresponding to the structure of the user table (it is rarely used directly, but the generated user objects are instances of this class), while UserDataContext.Users is the data collection corresponding to the user table and the entry point for adding, deleting, querying and updating. Please keep these two concepts apart.

  • DataField <-> Property. Table fields and class properties correspond one to one, but a field type and a property type are not the same thing: data read from or written to the database needs to be converted, which the system usually does automatically.

    We map fields to properties and each row of data to an object of the data entity class. Each table corresponds to a data collection held by the database context object, and the objects read from these collections are tracked by the context they belong to, so that all changes can be committed to the database at once when SaveChanges is called.

    Note that, unlike the C# Entity Framework, the database context here does not cache query results, so you shouldn't keep a context object around for a long time; close it as soon as the processing is finished.

So, we formally implement the UserDataContext context class used in the above example:

require "PLoop.System.Data"

PLoop(function(_ENV)
    import "System.Data"

    -- define a base database context whose constructor provides the database connection.
    -- Here the settings are written directly; in practice you can use the
    -- [Configuration System](014.configuration.md) and keep them in a dedicated configuration file.
    __DataContext__()
    class "ProjectDataContext" (function(_ENV)
        function __ctor(self)
            self.Connection = MySQLConnection {
                host = "127.0.0.1",
                port = 3306,
                database = "TestDB",
                user = "root",
                password = "xxxxxxx",
            }
        end
    end)

    -- database mapping; since the connection settings vary, they are usually provided
    -- through the configuration table, so only the database context is declared here.
    __DataContext__()
    class "UserDataContext" (function(_ENV)
        inherit "ProjectDataContext" -- inherit the base database context so we don't have to configure the connection ourselves

        -- mapping database tables and classes
        __DataTable__{
            -- name declares the database table name, or class name if not entered.
            name = "user",

            -- Must provide primary key and index information, at least primary key, for system optimization
            -- and some special features provided
            -- indexes can have multiple groups, so the value of indexes is an array construct type
            indexes = {
                -- primary key declaration; since a primary key can span multiple fields, the value of fields is also an array.
                -- primary means this index is primary.
                { fields = { "id" }, primary = true },
                -- unique indicates that the index is unique.
                { fields = { "name" }, unique = true },
                { fields = { "telno" }, unique = true },
            }
        }
        class "User" (function(_ENV))
            -- mapping table fields and class attributes, autoincr indicates that this is a self-increasing database field.
            -- You can specify the field name by name, which is the attribute name by default.
            __DataField__{ autoincr = true }
            property "id" { type = NaturalNumber }

            __DataField__{ notnull = true, unique = true }
            property "name" { type = String }

            __DataField__{ notnull = true, unique = true }
            property "telno" { type = String }
        end)

        __DataTable__{
            name = "login",

            indexes = {
                { fields = { "id", "login" }, primary = true },
            }
        }
        class "Login" (function(_ENV))
            __DataField__{ autoincr = true }
            property "id" { type = NaturalNumber }

            __DataField__{ notnull = true }
            property "login" { type = Date }

            -- foreign key association: the property type User indicates that the associated
            -- master table is the one mapped by User.
            -- foreign: map defines the mapping between the foreign key and the primary key;
            -- the key is the foreign key field (not the property name, although the property
            -- name is usually the same as the field name) and the value is the primary key
            -- field of the master table (again a field name, not a property name).
            -- After this definition, `login.user` returns the corresponding User object
            -- directly from a Login object.
            ---
            -- foreign's link definition adds a new property `LoginTimes` to the master class User.
            -- Because id as a foreign key is not unique (the primary key also contains login),
            -- accessing `LoginTimes` on a User object fetches all of that user's login rows,
            -- converts them into objects of the `Login` class and returns them in a List,
            -- e.g. `user.LoginTimes:Each(function(lg) print(lg.login) end)`.
            -- name specifies the name of the property added to User.
            -- order sorts the object list, here by the login field in descending order.
            -- A more detailed description follows in later subsections.
            __DataField__{
                foreign = {
                    map  = { id = "id" },
                    link = { name = "LoginTimes", order = { name = "login", desc = true } },
                }
            }
            property "user" { type = User }
        end)
    end)

    __DataContext__()
    class "DepartmentDataContext" (function(_ENV))
        -- Database context can also be inherited from other contexts
        -- can be similar with(DepartmentDataContext())(function(ctx) return ctx.Users:Query{ id = 123 }:First() end)
        -- access to data entities of the parent class
        inherit "UserDataContext"

        __DataTable__{
            name = "depart",

            indexes = {
                { fields = { "id" }, primary = true },
            }
        }
        class "Department" (function(_ENV))
            __DataField__{ autoincr = true }
            property "id" { type = NaturalNumber }
        end)
    end)
end)

An implementation of the MySQLConnection used above can be found in [NgxLua][] for the [Openresty][] server. The database connection is usually constructed in the constructor of the context class, as shown here.

System.Data.IDataContext

When the __DataContext__ attribute is applied, the class automatically extends the IDataContext interface, which in turn extends IAutoClose, so it can be used with the with keyword. The interface provides the following properties and methods for database context classes.

| Property | Type | Description |
|----------|------|-------------|
| Connection | System.Data.IDbConnection | read/write, the database connection |
| Transaction | System.Data.IDbTransaction | read-only, retrieves the in-use or newly created database transaction |

| Method | Arguments | Description |
|--------|-----------|-------------|
| Open | | open the database connection |
| Close | | close the database connection |
| SaveChanges | | save all data changes |
| Query | sql: String, ... : Any * 0 | generate the SQL from the format string and arguments, execute the query and return the result as a List |
| Execute | sql: String, ... : Any * 0 | generate the SQL from the format string and arguments, execute it and return the database result directly |
| QueryAsView | viewcls: -System.Data.IDataView, sql: String, ... : Any * 0 | generate the SQL from the format string and arguments, execute the query, wrap each result row in an object of the given view class and return them in a List |
| QueryView | viewcls: -System.Data.IDataView, ... : Any * 0 | use the view class's own bound format string together with the arguments to generate the SQL, execute the query, wrap each result row in an object of the view class and return them in a List |

Both Query and Execute accept raw SQL; for some complex queries, SQL is simpler and more direct than a complicated chain of calls. However, raw query results are not convenient to use in code directly (for example, date strings need to be converted into Date objects before they can be used correctly).

-- Pseudo-code, not directly runnable
local ctx = UserDataContext()
ctx.Connection = MySQLConnection{...}

ctx:Open()

local result = ctx:Query("SELECT * FROM user where id = %d", 100)
result:Each("u=>print(u.name)")

-- a table argument is joined with "," - this is handled by MySQLConnection, not by the abstract data entity framework.
result = ctx:Query("SELECT * FROM user where id in (%s)", { 1, 2, 3, 4, 5 })

ctx:Close()

The difference is that Query returns a List containing all result rows (an empty List when there is no result), while Execute passes whatever the database returns directly back to the caller.

These two APIs are mostly used by the framework itself, since they return plain Lua data that is not converted into objects of the corresponding classes, and the context object cannot track changes to that data. In practice we use data collection queries instead of raw SQL, so that objects are generated from the classes and tracked.
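
For contrast, here is a small sketch reusing the pseudo-code context above; what Execute returns depends entirely on the underlying connection implementation:

-- Query always returns a List of plain row tables, even when nothing matches
local rows = ctx:Query("SELECT id, name FROM user WHERE id > %d", 100)
for _, row in rows:GetIterator() do
    print(row.id, row.name)  -- plain Lua data, not tracked User objects
end

-- Execute passes the database's own return value straight back to the caller
local result = ctx:Execute("UPDATE user SET register = NOW() WHERE id = %d", 100)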

The use of QueryAsView and QueryView will be covered in a moment.

System.Data.IDataEntity

A class tagged with the __DataTable__ attribute automatically extends the IDataEntity interface to indicate that it is a mapping of the structure of a particular database table.

An object of such a class is called a data entity; it encapsulates one row of the database table. The __DataTable__ attribute binds the class to the structure of the specified table, so it only cares about the table's settings and only accepts a value of the DataTableSetting struct, defined as follows.

-- Definition of the index struct. A table usually contains several indexes used to optimize
-- queries; providing the index information also helps the system optimize its own processing.
struct "DataTableIndex" {
    -- index name; the framework itself does not use it, but if you plan to use the
    -- Code First design pattern, providing it helps when building the database
    { name = "name", type = String },

    -- whether the index is unique
    { name = "unique", type = Boolean },

    -- whether the index is a full-text index
    { name = "fulltext", type = Boolean },

    -- whether the index is the primary key
    { name = "primary", type = Boolean },

    -- the list of indexed fields; the order matters, especially for Code First design.
    -- These are database table field names, not property names (although they are usually kept consistent)
    { name = "fields", type = struct { String } }
}

-- Database table settings, used as the parameter of __DataTable__
struct "DataTableSetting" {
    -- table name; the class name is used if omitted
    { name = "name", type = String },

    -- an array of indexes whose element type is the DataTableIndex defined above
    { name = "indexes", type = struct { DataTableIndex } },

    -- Remember that the classes defined here map the structure of a database table, not the table
    -- itself. A data entity class is normally defined inside a database context class and saved as
    -- a subordinate type; the context then adds a property for it, usually named after the entity
    -- class with an "s" appended. Accessing that property returns a data collection through which
    -- we can read and add data entities - this is how the table data is manipulated. In the example
    -- above, `ctx.Users` is the data collection whose entity class is UserDataContext.User.
    --
    -- If you have a specific need, you can override the default property name through collection,
    -- but usually there is no such need.
    { name = "collection", type = String },

    -- the engine used for the database table; not used by the framework, normally only provided
    -- in the Code First design pattern when building the tables
    { name = "engine", type = String }
}

No member of DataTableSetting is required, but please provide the index information whenever you can; the system can optimize its work based on it.
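
For example, here is a minimal sketch (reusing the user table from the earlier example) that uses the optional collection setting, so the data collection is exposed as ctx.AllUsers instead of the default ctx.Users:

__DataTable__{
    name = "user",
    collection = "AllUsers",
    indexes = { { fields = { "id" }, primary = true } },
}
class "User" (function(_ENV)
    __DataField__{ autoincr = true }
    property "id" { type = NaturalNumber }
end)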

IDataEntity provides a number of methods, but outside the framework itself the Delete method is usually the only one needed.

| Method | Description |
|--------|-------------|
| Delete | mark the entity as deleted; the deletion is submitted to the database together with the context's other changes |

Mapping of table fields

A table structure consists of fields and field types, which correspond to the property names and property types of the data entity class. Note that when defining a property mapped to a field, please only declare the property type; do not define Get/Set accessors and the like - leave that processing to the system.

When defining a mapped property, we tag it with the System.Data.__DataField__ attribute (data entity classes can define their own properties; not all of them need to be mapped to table fields). Like __DataTable__, it takes a parameter of a specific struct type, System.Data.FieldSetting, which provides the field information.

-- field ordering
-- { name = "id", desc = true } -> "ORDER BY id DESC"
struct "QueryOrder" {
    { name = "name", type = String, require = true },
    { name = "desc", type = Boolean }
}

-- array of field orderings
-- { { name = "id", desc = false }, { name = "age", desc = true } } -> "ORDER BY id, age DESC"
struct "QueryOrders" { String + QueryOrder }

-- a converter between database value types and program value types; the system already provides
-- default converters for common types, e.g. Date<->DATETIME, Boolean<->TINYINT, so a field only
-- needs its own converter when the default behavior must be overridden
struct "TypeConverter" {
    -- database type --> code type
    { name = "fromvalue", type = Function, require = true },

    -- code type --> database type
    { name = "tovalue", type = Function, require = true },

    -- passed to the two functions above as the second argument together with the value;
    -- usually a format definition
    { name = "format", type = Any }
}

-- configuration of the associated property declared on the master table for a foreign key
struct "PrimaryLink" {
    { name = "name", type = String, require = true },
    { name = "order", type = String + QueryOrders },
}

-- foreign key settings
struct "ForeignMap" {
    -- mapping from the foreign key to the primary key of the master table, with the foreign key
    -- field as the key and the master table's primary key field as the value; note that both
    -- are field names
    { name = "map", type = Table, require = true },

    -- the associated property configuration on the master table; if only a string is given,
    -- it is used as the name and the query has no ORDER clause
    { name = "link", type = String + PrimaryLink }
}

-- field configuration struct
struct "FieldSetting" {
    -- field name; the property name is used by default
    { name = "name", type = String },

    -- database type of the field; not used by the framework, only used in the
    -- Code First design pattern for building or updating the database
    { name = "type", type = String },

    -- whether the field is unique
    { name = "unique", type = Boolean },

    -- whether the field is auto-increment; values are generated automatically by the database
    { name = "autoincr", type = Boolean },

    -- whether the field is NOT NULL
    { name = "notnull", type = Boolean },

    -- foreign key setting
    { name = "foreign", type = ForeignMap },

    -- data type converter; the framework's default converters are usually sufficient
    { name = "converter", type = TypeConverter },

    -- the format used for data type conversion; the framework default is usually enough
    { name = "format", type = Any },
}

Although there are many configuration items, in most cases, especially with the Database First design pattern, we don't need to provide most of them. Foreign keys, for example, are convenient in code, but if your project restricts their use you can skip them entirely. Settings such as unique, type and notnull are not required either, though they enable additional checks and optimizations.

__DataField__()
property "name" { type = String }

This is enough to declare a mapped property. Foreign keys can be declared as in the original example, but choosing not to use foreign keys is also a reasonable design.
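
For reference, a foreign key can also be declared without the link part; this sketch, based on the Login example above, only generates the login.user property and adds nothing to the master class:

__DataField__{ foreign = { map = { id = "id" } } }
property "user" { type = User }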

Access to data

In the above example, we have defined the database context UserDataContext, which defines two classes User and Login, which map to the user and login tables of the database respectively.

We can use the Users and Logins properties of the database context object to access the data collections corresponding to those tables. Let's look at queries first (the database connection is provided by the context class's constructor).

with(UserDataContext())(function(ctx)
    -- Get an object through an exact-match query table
    local user = ctx.Users:Query{ name = "ann", telno = "13412345678" }:First()

    -- all login time records are retrieved via the master-table property declared by the foreign key
    for _, lg in user.LoginTimes:GetIterator() do
        print(lg.user.name, lg.login) -- access the master table via the foreign-key-bound property
    end

    -- The query table only supports exact matching, which is enough for additions and deletions,
    -- but queries can be more complex. To simplify them, the framework provides a separate set of
    -- List-like chained calls for queries, where the Where step can take SQL directly, so the
    -- framework doesn't need to provide anything more elaborate.
    -- Also, since query results are saved as objects of the corresponding entity class, JOIN is
    -- not supported; this is for performance reasons, and when JOIN processing is needed, the
    -- QueryView described later is the better fit.
    ---
    -- SELECT * FROM user WHERE id > 12 ORDER BY name DESC, id OFFSET 30 LIMIT 10
    local users = ctx.Users:Where("id > %d", 12):OrderBy("name", true):OrderBy("id"):Offset(30):Limit(10):Query()
end)

ctx.Users returns a data collection whose element type is the UserDataContext.User class; the type of the collection object itself is System.Data.DataCollection[UserDataContext.User]. The DataCollection template class is generated and optimized for the given data entity class.
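
Since a query result is a List, it can be handled like any other List; a small usage sketch:

for _, user in ctx.Users:Query{ name = "ann" }:GetIterator() do
    print(user.id, user.name, user.telno)
end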

The generated class provides the basic methods for data collection operations.

| Method | Arguments | Description |
|--------|-----------|-------------|
| Query | condition: Table, order: QueryOrders/nil | query with exact-match conditions and optional ordering; each row is wrapped in a data entity object and the objects are returned in a List |
| Query | sql: String, ... : Any * 0 | query with raw SQL; each row is wrapped in a data entity object and the objects are returned in a List |
| Lock | condition: Table, order: QueryOrders/nil | query and lock rows with exact-match conditions and optional ordering; results are wrapped in data entity objects and returned in a List |
| QueryAll | order: QueryOrders/nil | query all rows with optional ordering; results are wrapped in data entity objects and returned in a List |
| Add | data: Table | wrap the given data table in a data entity object, add it to the collection and wait for the final commit to the database |

The Query method of a data collection is built on the database context's Query, converting each row of the result into a corresponding object. When the conditions get more complex, SQL is easier to use; to minimize the amount of SQL scattered through the code, the data collection object also supports another set of chained calls.

| Queue Method | Arguments | Description |
|--------------|-----------|-------------|
| Where | condition: Table | add exact-match query conditions |
| Where | condition: String, ... : Any * 0 | add a custom query condition; the WHERE keyword is not needed, just the condition itself |
| OrderBy | name: String, desc: Boolean/nil | add an order condition; can be called several times to specify multiple order fields |
| Limit | limit: NaturalNumber | set the LIMIT of the query |
| Offset | offset: NaturalNumber | set the OFFSET of the query |

| Final Method | Arguments | Description |
|--------------|-----------|-------------|
| Lock | | lock and return the result set |
| Query | | return the query result set |

Here are more use cases.

-- SELECT xxx FROM item WHERE createdate = '2019-05-17' ORDER BY id DESC LIMIT 10 OFFSET 30
ctx.Items:Where{ createdate = Date.Now:AddDays(-1) }:OrderBy("id", true):Limit(10):Offset(30):Query()

-- SELECT xxx FROM item WHERE id in (1, 2, 3)
ctx.Items:Where("ID in (%s)", {1, 2, 3}):Query() -- converts attribute ID to field id

Allowing property names (rather than requiring field names) in query conditions keeps the conditions closer to the code's own logic. We still have to follow the SQL syntax of the underlying database, but that usually doesn't cause problems.

Queries are comparatively complex, but for additions and deletions the data collection is quite convenient.

Creation of data

To make sure the connection is released even when processing fails, use with to manipulate database context objects.

-- Add a user
function addUser(name, telno)
    local user

    with(UserDataContext())(function(ctx)
        with(ctx.Transaction)(function(trans)
            user = ctx.Users:Add{
                name = name,
                telno = telno,
            }

            -- save the data first so that the user can have the id value.
            ctx:SaveChanges()

            local login = ctx.Logins:Add{
                -- id = user.id could also be used here
                user = user,
                login = Date.Now,
            }

            -- Save initial login information
            ctx:SaveChanges()
        end)
    end)

    return user
end

Note that the property mapped to the foreign key can be assigned the master table object directly; the foreign key then takes the value of the master table's primary key. This is also convenient for later association operations.
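
For example, the two Add calls below are equivalent (a sketch restating the example above):

-- set the foreign key through the mapped property
local login = ctx.Logins:Add{ user = user, login = Date.Now }

-- equivalently, set the key field value directly
-- local login = ctx.Logins:Add{ id = user.id, login = Date.Now }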

Updating the data

To update data, a transaction must be started and the rows must be locked with Lock before they can be updated, so that the changes to the data entity objects are tracked by the system.

function UpdateUserName(telno, name)
    with(UserDataContext())(function(ctx)
        with(ctx.Transaction)(function(trans)
            local user = ctx.Users:Lock{ telno = telno }:First()

            if user then
                user.name = name
                ctx:SaveChanges()
            else
                -- must be rolled back to avoid being submitted when no processing is done
                trans:Rollback()
            end
        end)
    end)
end

Deletion of data

function deleteUser(id)
    if not id then return end

    with(UserDataContext())(function(ctx)
        with(ctx.Transaction)(function(trans)
            local user = ctx.Users:Lock{ id = id }:First()

            if user then
                -- this just marks itself for deletion.
                user:Delete()

                -- Submission of deletion operations
                ctx:SaveChanges()
            else
                trans:Rollback()
            end
        end)
    end)
end

Automatic conversion of data types

In the result set of a database query, the values are usually plain strings and numbers (if the database library does simple parsing), but in the program we also use types such as Boolean and Date. Relying on developers to convert them by hand every time is both time consuming and error prone.

A data entity object reads and writes field values through its properties, and those reads and writes are controlled by the framework, so the system can step in during this process and convert between the business data types and the database field types.

In the __DataField__ settings above, we can specify converter and format so that values are converted automatically between the business data type and the database field type. We only need to care about the business data type, not the exact definition of the database field.

Usually only two types, Date and Boolean, need conversion, and their conversions are fixed, so the framework registers default converters for them and nothing has to be declared in __DataField__. If needed, you can register your own default converters for other types, as explained below.

The Login table definition above uses the Date type. As the example shows, we can assign Date.Now to it directly, or use richer date operations:

-- add login history, allow mod to modify seconds
function addLogin(id, mod)
    with(UserDataContext())(function(ctx)
        with(ctx.Transaction)(function(trans)
            local login = ctx.Logins:Add{
                id = id,
                login = Date.Now:AddSeconds(mod),
            }

            ctx:SaveChanges()
        end)
    end)
end

The framework's default converter for the Date type is registered like this:

__DataField__.Converter[Date] = {
    fromvalue = Date,
    tovalue = Date.ToString,
}

No format is given here because the default conversion format for Date is %Y-%m-%d %X, which matches MySQL's date-time type. If you need a different format, you can override the original registration yourself instead of specifying it on every field mapping:

__DataField__.Converter[Date] = {
    fromvalue = Date,
    tovalue = Date.ToString,
    format = "%Y-%m-%d",
}

Usually only the date type has a formatting requirement. If there is a need for separate formatting for only a few field mappings, it can be declared as:

__DataField__{ notnull = true, format = "%Y-%m-%d" }
property "login" { type = Date }

This overwrites the default format, but not the default converter.

The corresponding converter for Boolean is:

__DataField__.Converter[Boolean] = {
    fromvalue = function(value)
        return tonumber(value) == 1 or false
    end,
    tovalue = function(object)
        return object and 1 or 0
    end,
}

So in the database it's actually 1 for true and 0 for false, but in code you just write a normal boolean check such as if not entity.deleted then end and don't worry about the database representation.
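
If a field needs a conversion the framework doesn't provide, the converter setting of __DataField__ can be used on that field directly. Below is a hedged sketch; the StringList struct, the tags field and the comma-separated storage format are illustrative assumptions, not part of the framework:

struct "StringList" { String }

-- store a list of strings in a single text column as "a,b,c" (illustrative only)
__DataField__{
    name      = "tags",
    converter = {
        -- database value --> code value
        fromvalue = function(value)
            local list = {}
            for item in value:gmatch("[^,]+") do list[#list + 1] = item end
            return list
        end,
        -- code value --> database value
        tovalue   = function(list) return table.concat(list, ",") end,
    },
}
property "tags" { type = StringList }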

DBNull

Different database access libraries usually define a special constant as a placeholder for the database NULL value (DBNull). In our code it needs to be converted to nil before use; likewise, when we assign nil to a property of a data entity object, the corresponding database value should become DBNull, so the system knows that a previously existing value must now be set to NULL.

The framework provided by PLoop is only an abstract implementation, and the concrete NULL placeholder differs between platforms, so each platform must register the DBNull placeholder it uses with the framework; the framework can then handle it automatically. For example, for the ngx.null of [Openresty][], the following registration is needed:

System.Data.AddNullValue(ngx.null) -- for OpenResty

After that, ParseValue can be used to filter out ngx.null and convert it to nil, avoiding mistakes in boolean checks:

value = System.Data.ParseValue(value)

In addition, the system provides a value to use when you need to write DBNull explicitly:

entity.name = System.Data.DBNull

In practice, neither ParseValue nor DBNull needs to appear in application code; __DataField__ already does this processing when it encapsulates property access, so we can simply write:

entity.name = nil

This matches normal Lua development practice; DBNull is used at the lower levels and application developers normally never see it.

For [Openresty][], all of this is already implemented in [NgxLua][] (currently only the MySQL database is covered; implementations for other databases can follow it).

Data view

To summarize: database context objects provide the basic query functionality, and data collections wrap query results into objects of the data entity classes, so we can access all data with business-logic types.

However, data collections only support single-table queries and are mostly used for additions and deletions. If we only need to query, and the query spans multiple tables, we are left with the basic query of the database context object and have to convert the results ourselves.

On the other hand, we also want to minimize the number of SQL statements in business code and avoid having to change them in many files. This is where the data view class comes in; it is close to the data entity class and lets us access the query results through a specified type.

A data view class is only for reading data: its objects are not tracked by the framework and modifications are never committed to the database, which makes it free to handle the results of multi-table queries.

Usually we bind an SQL statement to the data view class, so that only the parameters need to be passed when it is used. This also keeps the class structure and the SQL definition in one place, so a change only has to be made there rather than in every piece of application code. Consider the example below.

__DataView__[[select id, name, telno from user where id > %d order by id limit %d]]
class "UserPageView" (function(_ENV))
        __DataField__()
        property "id" { type = NaturalNumber }

        __DataField__()
        property "name" { type = String }

        __DataField__()
        property "telno" { type = String }
end)

As when defining a data entity class, we use the __DataField__ attribute to map properties to the fields of the query result, and the property type helps convert the result values (on read only, as an optimization).

The difference is that we use __DataView__ to declare the data view class and bind it to an SQL statement instead of a database table. The SQL statement uses %d, %s, etc. as placeholders, which are replaced by the arguments when the query runs.

The SQL statement above is meant for paging and is used like this:

-- querying the top 100 users
local lst = ctx:QueryView(UserPageView, 0, 100)

All you need to do is pass the parameters, and the elements of the returned list are UserPageView objects. Alternatively, you can supply a new SQL statement instead of the bound one, as long as the queried fields keep the same structure:

local lst = ctx:QueryAsView(UserPageView, [[select id, name, telno from user where id < %d order by id desc limit %d]], 999, 100)

However, this is usually not recommended; view classes bound to their SQL statements are easier to maintain.

Both the Data Entity class and the Data View class are serializable, and in a web framework, the returned list can be returned directly to the client as JSON (the list itself is serializable).

Data object

A data object is different from a data entity: a data entity corresponds to one row of one database table, while a data object is assembled from table data and may combine data from several tables:

__DataContext__()
class "UserDataContext" (function(_ENV))
    __DataTable__ {
        name = "user",

        indexes = {
            { fields = { "id" }, primary = true },
            { fields = { "name" }, unique = true },
            { fields = { "telno" }, unique = true },
        }
    }
    class "User" (function(_ENV))
        __DataField__{ autoincr = true }
        property "id" { type = NaturalNumber }

        __DataField__{ notnull = true }
        property "name" { type = String }

        __DataField__{ notnull = true }
        property "telno" { type = String }
    end)

    __DataObject__{ index = { "uid" } }
    class "UserObject" (function(_ENV))
        property "uid" { type = NaturalNumber }

        property "name" { type = String }

        function __ctor(self, context, uid)
            local user = context.Users:Query{ id = uid }:First()
            if not user then throw("The user doesn't exist") end

            self.uid = user.id
            self.name = user.name
        end
    end)
end)

In the example above, the data object class UserObject is defined, its primary key is declared as uid with the __DataObject__ attribute, and a constructor is defined that accepts the database context object followed by all primary key values (note that multiple primary keys can be declared, in order).

In the constructor, if the query finds nothing, an error can be raised with throw; the system then knows the data doesn't exist and the error content is discarded. As you can see, the contents of a data object can be organized however you like, and data objects are serializable by default.

The following is an example of a query:

with(UserDataContext())(function(ctx)
    local object = ctx.UserObjects:Query(3) -- SELECT id,name,telno FROM user WHERE id=3
end)

Similar to the data entity class, the system automatically adds a property to the database context (by default the class name with an "s" appended). The value of that property is a collection of the data object class; its Query method takes the primary key values in declaration order, and because the primary key is unique, the result is a single data object rather than a List - this is where it differs from data entity collections.

Also, modifications to data objects are not written back to the database; they are just ordinary objects built from database data.
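
As a further sketch of combining tables, the data object below pulls from both the user and login tables defined earlier; it would be defined inside the same database context class, and the class and property names are illustrative assumptions:

__DataObject__{ index = { "uid" } }
class "UserWithLastLogin" (function(_ENV)
    property "uid"       { type = NaturalNumber }
    property "name"      { type = String }
    property "lastlogin" { type = Date }

    function __ctor(self, context, uid)
        local user = context.Users:Query{ id = uid }:First()
        if not user then throw("The user doesn't exist") end

        -- the latest login row for this user, ordered by the login field descending
        local last = context.Logins:Query({ id = uid }, { { name = "login", desc = true } }):First()

        self.uid       = user.id
        self.name      = user.name
        self.lastlogin = last and last.login
    end
end)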

Interfacing to specific databases

The database contexts, data collections, data entity classes, data view classes and field-to-property mappings above form the complete abstract data entity framework. They provide a full data model and development framework, but the underlying database operations are abstract and not tied to any particular database or platform.

To connect it to a real project, we need an implementation for the specific database. The framework specifies the interfaces and details that must be implemented (if you use [NgxLua][] with MySQL, this part can be skipped).

-- Database connection status
enum "ConnectionState" {
    Closed = 0, -- closed
    Open = 1, -- Opened
    Connecting = 2, -- connecting
    Executing = 3, -- executing
    Fetching = 4, -- fetching
}

-- transaction isolation levels
enum "TransactionIsolation" {
    REPEATABLE_READ = 0,
    READ_UNCOMMITTED = 1,
    READ_COMMITTED = 2,
    SERIALIZABLE = 3,
}

-- SQL builder
interface "ISqlBuilder" (function(_ENV))
    --- Set the group of fields to be selected.
    -- @param fields A list of field names (also available on regular tables) or a string containing field names.
    -- @return self must return itself in order to complete the chain, as follows
    __Abstract__() function Select(self, fields) return self end

    --- Query and lock result sets
    -- @return self
    __Abstract__() function Lock(self) return self end

    --- Set the fields and values to be updated, and generate update statements
    -- @param map A table of mapped fields and values.
    -- @return self
    __Abstract__() function Update(self, map) return self end

    --- Set delete flag, generate delete statement
    -- @return self
    __Abstract__() function Delete(self) return self end

    --- Set the fields and values to be used for insertion and generate the insertion statement.
    -- @param map A table of mapped fields and values.
    -- @return self
    __Abstract__() function Insert(self, map) return self end

    --- Set the corresponding database table
    -- @param name database table name
    -- @param alias the table alias
    -- @return self
    __Abstract__() function From(self, name, alias) return self end

    --- Set Search Criteria
    -- @param condition either a table for an exact-match query, or a formatted condition string (without WHERE, just the condition)
    -- @return self
    __Abstract__() function Where(self, condition, ...) return self end

    --- Set the sort field
    -- @param name field name
    -- @param desc whether reverse order
    -- @return self
    __Abstract__() function OrderBy(self, field, desc) return self end

    --- Generate SQL, which is the final process after the above operation is performed.
    -- @return sql generated SQL statement
    __Abstract__() function ToSql(self) end
end)

--- Database connections, managing connections and completing operations
interface "IDbConnection" (function(_ENV))
    extend "IAutoClose"

    -----------------------------------------------------------
    -- abstract property --
    -----------------------------------------------------------
    --- Connection status
    __Abstract__() property "State" { type = ConnectionState, default = ConnectionState.Closed }

    --- a class that implements ISqlBuilder, used to generate the actual query statements.
    -- Note that this class is used by the framework rather than by the IDbConnection object itself;
    -- it is declared here so that all operations for one database are bundled in a single class.
    __Abstract__() property "SqlBuilder" { type = -ISqlBuilder }

    -----------------------------------------------------------
    -- abstract methods --
    -----------------------------------------------------------
    --- return a new database transaction
    -- @param isolation transaction isolation level
    __Abstract__() function NewTransaction(self, isolation) end

    --- sends a query request and returns the result set
    -- @param sql Formatting Sql Query Statements
    -- @param ...   Query Parameters
    -- @return list Result set list
    __Abstract__() function Query(self, sql, ...) end

    --- sends an insertion request and returns the self-increasing id, if it exists.
    -- @param sql Formatting Sql Insert Statements
    -- @param ...   insert parameter
    -- @return id self-added id
    __Abstract__() function Insert(self, sql, ...) end

    --- Send update request
    -- @param sql formatted Sql update statement
    -- @param ...   Update parameters
    __Abstract__() function Update(self, sql, ...) end

    --- Send Delete Request
    -- @param sql formatted Sql delete statement
    -- @param ...   Delete the parameter
    __Abstract__() function Delete(self, sql, ...) end

    --- Execute an arbitrary request and return the database results directly
    -- @param sql Formatted SQL statement
    -- @param ...   Parameters
    -- @return ...   whatever the database returns
    __Abstract__() function Execute(self, sql, ...) end
end)

--- Database transaction interface for performing a set of operations that can be rolled back.
interface "IDbTransaction" (function(_ENV))
    extend "IAutoClose" - Although this interface is extended, there is no need to implement the Open/Close method yourself, this interface is already implemented

    -----------------------------------------------------------
    -- abstract property --
    -----------------------------------------------------------
    --- The database link of the transaction, usually set in NewTransaction using IDbConnection.
    __Abstract__() property "Connection" { type = IDbConnection }

    --- The isolation level of the transaction, which should also be set in NewTransaction.
    __Abstract__() property "Isolation" { type = TransactionIsolation, default = TransactionIsolation.REPEATABLE_READ }

    -----------------------------------------------------------
    -- abstract methods --
    -----------------------------------------------------------
    --- Begin the transaction
    __Abstract__() function Begin(self) end

    --- Commit the transaction
    __Abstract__() function Commit(self) end

    --- Roll back the transaction
    __Abstract__() function Rollback(self) end
end)

Note that the field-value tables involved above contain data that has already been converted to the database format for each field type; they have nothing to do with the property types of the data entity classes. All of this is lower-level processing and does not involve the classes described above.

Here's a look at a simple implementation.

require "PLoop.System.Data"

PLoop(function(_ENV)
    import "System.Data"

    class "MySQLConnection" {}

    __Sealed__() class "MySQLTransaction" (function(_ENV)
        extend "IDbTransaction"

        -----------------------------------------------------------
        --                         method                        --
        -----------------------------------------------------------
        function Begin(self)
            self.Connection:Execute("TRANSACTION BEGIN")
        end

        function Commit(self)
            self.Connection:Execute("TRANSACTION COMMIT")
        end

        function Rollback(self)
            self.Connection:Execute("TRANSACTION ROLLBACK")
        end

        -----------------------------------------------------------
        --                      constructor                      --
        -----------------------------------------------------------
        __Arguments__{ MySQLConnection, TransactionIsolation/TransactionIsolation.REPEATABLE_READ }
        function __ctor(self, conn, isolation)
            self.Connection = conn
            self.Isolation  = isolation
        end
    end)

    -- Sql builder
    __Sealed__() class "MySQLBuilder" (function(_ENV)
        extend "ISqlBuilder"

        export {
            SQLTYPE_SELECT      = 1,
            SQLTYPE_UPDATE      = 2,
            SQLTYPE_DELETE      = 3,
            SQLTYPE_INSERT      = 4,

            FIELD_SQLTYPE       = 1,
            FIELD_SELECT        = 2,
            FIELD_UPDATE        = 3,
            FIELD_INSERT        = 4,
            FIELD_FROM          = 5,
            FIELD_WHERE         = 6,
            FIELD_ORDERBY       = 7,

            quote_sql_str       = function(str) return ("%q"):format(str) end,
            type                = type,
            tblconcat           = table.concat,
            tostring            = tostring,
            pairs               = pairs,

            DBNull              = System.Data.DBNull,
        }

        local function escape(val)
            if val == DBNull then return "NULL" end

            local vtype         = type(val)

            if vtype == "boolean" then
                return val and "1" or "0"
            elseif vtype == "string" then
                return quote_sql_str(val)
            else
                return tostring(val)
            end
        end

        -----------------------------------------------------------
        --                         method                        --
        -----------------------------------------------------------
        function Select(self, fields)
            self[FIELD_SQLTYPE] = SQLTYPE_SELECT

            if type(fields) == "table" then
                fields          = tblconcat(fields, ",")
            else
                fields          = tostring(fields)
            end

            self[FIELD_SELECT]  = fields ~= "" and fields or nil

            return self
        end

        function Insert(self, map)
            if type(map) == "table" then
                self[FIELD_SQLTYPE] = SQLTYPE_INSERT

                local fields    = {}
                local values    = {}
                local index     = 1

                for fld, val in pairs(map) do
                    fields[index] = fld
                    values[index] = escape(val)
                    index       = index + 1
                end

                fields          = tblconcat(fields, ",")
                values          = tblconcat(values, ",")

                if fields ~= "" then
                    self[FIELD_SELECT] = fields
                    self[FIELD_INSERT] = values
                end
            end

            return self
        end

        function Update(self, map)
            self[FIELD_SQLTYPE] = SQLTYPE_UPDATE

            if type(map) == "table" then
                local temp      = {}
                local index     = 1
                local first     = true

                for fld, val in pairs(map) do
                    if not first then
                        temp[index] = ","
                        index   = index + 1
                    end
                    first       = false

                    temp[index] = fld
                    index       = index + 1

                    temp[index] = "="
                    index       = index + 1

                    temp[index] = escape(val)
                    index       = index + 1
                end

                map             = tblconcat(temp, "")
            else
                map             = tostring(map)
            end

            self[FIELD_UPDATE]  = map ~= "" and map or nil

            return self
        end

        function Delete(self)
            self[FIELD_SQLTYPE] = SQLTYPE_DELETE

            return self
        end

       function From(self, name)
            self[FIELD_FROM]    = name
            return self
        end

        function Where(self, condition)
            if type(condition) == "table" then
                local temp      = {}
                local index     = 1
                local first     = true

                for fld, val in pairs(condition) do
                    if not first then
                        temp[index] = " AND "
                        index   = index + 1
                    end
                    first       = false

                    temp[index] = fld
                    index       = index + 1

                    temp[index] = "="
                    index       = index + 1

                    temp[index] = escape(val)
                    index       = index + 1
                end

                condition       = tblconcat(temp, "")
            else
                condition       = tostring(condition)
            end

            self[FIELD_WHERE]   = condition ~= "" and condition or nil

            return self
        end

        function OrderBy(self, field, desc)
            if desc then field  = field .. " DESC" end

            if self[FIELD_ORDERBY] then
                self[FIELD_ORDERBY] = self[FIELD_ORDERBY] .. "," .. field
            else
                self[FIELD_ORDERBY] = field
            end

            return self
        end

        function ToSql(self)
            local temp          = {}

            local sqltype       = self[FIELD_SQLTYPE]

            if not sqltype then return end

            if sqltype == SQLTYPE_SELECT then
                temp[1]         = "SELECT"
                temp[2]         = self[FIELD_SELECT] or "*"
                temp[3]         = "FROM"

                if not self[FIELD_FROM] then return end
                temp[4]         = self[FIELD_FROM]

                if self[FIELD_WHERE] then
                    temp[5]     = "WHERE"
                    temp[6]     = self[FIELD_WHERE]
                else
                    temp[5]     = ""
                    temp[6]     = ""
                end

                if self[FIELD_ORDERBY] then
                    temp[7]     = "ORDER BY"
                    temp[8]     = self[FIELD_ORDERBY]
                else
                    temp[7]     = ""
                    temp[8]     = ""
                end
            elseif sqltype == SQLTYPE_UPDATE then
                temp[1]         = "UPDATE"

                if not self[FIELD_FROM] then return end
                temp[2]         = self[FIELD_FROM]
                temp[3]         = "SET"

                if not self[FIELD_UPDATE] then return end
                temp[4]         = self[FIELD_UPDATE]

                if self[FIELD_WHERE] then
                    temp[5]     = "WHERE"
                    temp[6]     = self[FIELD_WHERE]
                else
                    temp[5]     = ""
                    temp[6]     = ""
                end
            elseif sqltype == SQLTYPE_DELETE then
                temp[1]         = "DELETE FROM"

                if not self[FIELD_FROM] then return end
                temp[2]         = self[FIELD_FROM]

                if self[FIELD_WHERE] then
                    temp[3]     = "WHERE"
                    temp[4]     = self[FIELD_WHERE]
                else
                    return
                end
            elseif sqltype == SQLTYPE_INSERT then
                temp[1]         = "INSERT INTO"

                if not self[FIELD_FROM] then return end
                temp[2]         = self[FIELD_FROM]

                if not self[FIELD_SELECT] then return end

                temp[3]         = "("
                temp[4]         = self[FIELD_SELECT]
                temp[5]         = ") VALUES ("

                if not self[FIELD_INSERT] then return end

                temp[6]         = self[FIELD_INSERT]
                temp[7]         = ")"
            end

            return tblconcat(temp, " ")
        end

        -----------------------------------------------------------
        --                      constructor                      --
        -----------------------------------------------------------
        function __new(self)
            return {
                [FIELD_SQLTYPE] = false,
                [FIELD_SELECT]  = false,
                [FIELD_UPDATE]  = false,
                [FIELD_INSERT]  = false,
                [FIELD_FROM]    = false,
                [FIELD_WHERE]   = false,
                [FIELD_ORDERBY] = false,
            }, true
        end
    end)

    -- The connection
    __Sealed__() class "MySQLConnection" (function(_ENV)
        extend "IDbConnection"

        property "SqlBuilder" { set = false, default = MySQLBuilder }

        -----------------------------------------------------------
        --                        method                        --
        -----------------------------------------------------------
        __Arguments__{ TransactionIsolation/TransactionIsolation.REPEATABLE_READ }
        function NewTransaction(self, isolation)
            return MySQLTransaction(self, isolation)
        end

        function Close(self)
            print("[Connection][Close]")
        end

        function Open(self)
            print("[Connection][Open]")
        end

        function Query(self, sql)
            print("[SQL][Query]", sql)
        end

        function Insert(self, sql)
            print("[SQL][Insert]", sql)
        end

        function Update(self, sql)
            print("[SQL][Update]", sql)
        end

        function Delete(self, sql)
            print("[SQL][Delete]", sql)
        end

        function Execute(self, sql)
            print("[SQL][Execute]", sql)
        end
    end)

    -- Test 
    __DataContext__()
    class "UserDataContext" (function(_ENV)
        function __ctor(self)
            self.Connection = MySQLConnection()
        end

        __DataTable__ {
            name         = "user",

            indexes      = {
                { fields = { "id" },   primary = true },
                { fields = { "name" },  unique = true },
                { fields = { "telno" },  unique = true },
            }
        }
        class "User" (function(_ENV)
            __DataField__{ autoincr = true }
            property "id"           { type = NaturalNumber }

            __DataField__{ notnull = true }
            property "name"         { type = String }

            __DataField__{ notnull = true }
            property "telno"        { type = String }
        end)
    end)

    -- [Connection][Open]
    with(UserDataContext())(function(ctx)
        -- [SQL][Query] SELECT id, name, telno FROM user WHERE id=1
        local e = ctx.Users:Query{ id = 1 }:First()

        -- [SQL][Execute]   TRANSACTION BEGIN
        with(ctx.Transaction)(function(tran)
            e = ctx.Users:Add{ name = "Ann" }

            -- [SQL][Insert]    INSERT INTO user ( name ) VALUES ( "Ann" )
            ctx:SaveChanges()
        end)
        -- [SQL][Execute]  TRANSACTION COMMIT
    end)
    -- [Connection][Close]
end)
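The SQL statements shown in the comments above are produced by the connection's SqlBuilder (MySQLBuilder here). The builder can also be exercised on its own; the fragment below is only a sketch and must run inside the same PLoop(function(_ENV) ... end) block, after MySQLBuilder has been defined.

    -- build the same kind of SELECT the data context generated above
    local sql = MySQLBuilder()
        :Select{ "id", "name", "telno" }
        :From("user")
        :Where{ id = 1 }
        :OrderBy("id", true)
        :ToSql()

    -- SELECT id,name,telno FROM user WHERE id=1 ORDER BY id DESC
    print(sql)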

Cache databases

Reading from a relational database is relatively slow. To solve this problem, we usually turn to memory-based cache databases such as Redis or Memcached, which keep only key-value pairs and manage the life cycle of those pairs through a specified timeout.

Compared with relational databases, whose design and usage are fairly uniform, cache databases are relatively free in how they are implemented and how they store data.

System.Data.ICache

The ICache interface declares the methods that must be implemented for the cache class.

interface "System.Data.ICache" (function(_ENV))
    extend "System.IAutoClose"

    -----------------------------------------------------------
    --                    abstract methods                   --
    -----------------------------------------------------------
    --- Try to write a key-value pair to the cache database; do nothing if the key already exists, and return true when the value is set successfully
    -- @param key
    -- @param value
    -- @param expiretime    the expire time, given either as remaining seconds or as a specific Date
    -- @return success
    __Abstract__() function TrySet(self, key, value, expiretime) end

    --- Write a key-value pair to the cache database
    -- @param key
    -- @param value
    -- @param expiretime    the expire time, given either as remaining seconds or as a specific Date
    __Abstract__() function Set(self, key, value, expiretime) end

    --- Refresh expiration time for specific keys
    -- @param key
    -- @param expiretime    the expire time, given either as remaining seconds or as a specific Date
    __Abstract__() function SetExpireTime(self, key, expiretime) end

    --- Get the value of the specified key
    -- @param key
    -- @return value        the corresponding value
    __Abstract__() function Get(self, key) end

    --- Whether the specified key exists in the cache database
    -- @param key
    -- @return existed      whether the key exists
    __Abstract__() function Exist(self, key) end

    --- Delete the specified key
    -- @param key
    __Abstract__() function Delete(self, key) end
end)

The values passed in may be serializable objects. Since the system has no knowledge of how the cache is implemented (we could even use a plain Lua table as the cache), serialization and deserialization must be handled by the cache class itself.
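As an illustration only (not part of the library), here is a minimal ICache implementation backed by a plain Lua table; because the values never leave the Lua state, no serialization is needed, and the sketch assumes that a Date passed as the expire time exposes its os.time value through its Time property.

require "PLoop.System.Data"

PLoop(function(_ENV)
    import "System.Data"

    -- a cache backed by a plain Lua table, mainly useful for tests;
    -- values are stored as-is, so no serialization is required here
    __Sealed__() class "TableCache" (function(_ENV)
        extend "ICache"

        --- the per-object storage table
        property "Storage" { type = Table }

        -- turn "remaining seconds or a specific Date" into an absolute timestamp
        -- (assumes the Date object exposes its os.time value via the Time property)
        local function toExpire(expiretime)
            if expiretime == nil then return nil end
            if type(expiretime) == "number" then return os.time() + expiretime end
            return expiretime.Time
        end

        -- return the entry only when it exists and hasn't expired
        local function valid(entry)
            return entry and (not entry.expire or entry.expire > os.time()) and entry or nil
        end

        function Open(self) end                     -- nothing to connect to
        function Close(self) end

        function Set(self, key, value, expiretime)
            self.Storage[key] = { value = value, expire = toExpire(expiretime) }
        end

        function TrySet(self, key, value, expiretime)
            if valid(self.Storage[key]) then return false end
            self:Set(key, value, expiretime)
            return true
        end

        function SetExpireTime(self, key, expiretime)
            local entry = valid(self.Storage[key])
            if entry then entry.expire = toExpire(expiretime) end
        end

        function Get(self, key)
            local entry = valid(self.Storage[key])
            return entry and entry.value or nil
        end

        function Exist(self, key)
            return valid(self.Storage[key]) ~= nil
        end

        function Delete(self, key)
            self.Storage[key] = nil
        end

        function __ctor(self)
            self.Storage = {}
        end
    end)

    -- quick check
    with(TableCache())(function(cache)
        cache:Set("user_100", { id = 100, name = "Ann" }, 600)   -- keep for 10 minutes
        print(cache:Get("user_100").name)                        -- Ann
    end)
end)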

Two implementations of the cache class can be found in [NgxLua][]: NgxLua.Redis corresponds to the Redis cache database, and NgxLua.ShareDict corresponds to the shared memory dictionary provided by [Openresty][].

The following is an example of its use.

require "NgxLua"

PLoop(function(_ENV)
    -- Open Server: 127.0.0.1:6379
    with(NgxLua.Redis{ host = "127.0.0.1", port = 6379 })(function(redis)
        -- add a user record and keep it for 10 minutes; NgxLua.Redis handles
        -- serialization and deserialization, so objects and tables can be saved directly
        redis:Set("user_100", { id = 100, name = "Ann" }, Date.Now:AddMinutes(10))
    end)
end)
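The remaining ICache methods are used in the same way; the fragment below is only a sketch, continuing inside the same PLoop block with the connection settings above.

    with(NgxLua.Redis{ host = "127.0.0.1", port = 6379 })(function(redis)
        if redis:Exist("user_100") then
            local user = redis:Get("user_100")     -- deserialized back into a table
            print(user.name)                       -- Ann
        end

        -- push the expiration another 10 minutes out, or drop the key right away
        redis:SetExpireTime("user_100", Date.Now:AddMinutes(10))
        redis:Delete("user_100")
    end)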

Joint processing of cache and database

In a real system, when we read a data entity object we also save it into the cache, so that the next time it is needed it can be fetched directly from the cache, which is much faster than reading it from the database again.

When querying by primary key, we first check the cache; if the object is not there, we fetch it from the database and then save it into the cache for the next read.
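Written out by hand, that read-through flow looks roughly like the sketch below; GetUser and the cache key format are hypothetical, and it assumes the UserDataContext from the earlier examples together with any ICache implementation such as NgxLua.Redis.

-- manual read-through caching: roughly what the joint processing below automates
function GetUser(cache, id)
    local key  = "user:" .. id              -- hypothetical key format
    local user = cache:Get(key)
    if user then return user end            -- cache hit

    -- cache miss: load from the database, then cache it for the next read
    with(UserDataContext())(function(ctx)
        user = ctx.Users:Query{ id = id }:First()
    end)

    if user then cache:Set(key, user, 10 * 60) end
    return user
end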

This pattern is very common. To simplify the processing, the system provides joint cache-and-database handling: we can bind it to a data entity class simply by using the two attributes __DataContextCache__ and __DataCacheEnable__.

A simple example:

-- Use __DataContextCache__ to specify the cache class for the database context class.
-- The cache class should set up its own connection in its constructor, since it is
-- difficult to pass things like connection strings from the database context class.
__DataContext__() __DataContextCache__(RedisCache)
class "UserDataContext" (function(_ENV))
    -- Database context objects also need to manage their own database connections.
    function __ctor(self)
        self.Connection = MySQLConnection()
    end

    -- For data entity classes that need automatic caching, we must apply the
    -- __DataCacheEnable__ attribute, which provides settings such as the timeout.
    -- The timeout is how long the data entity object is kept in the cache; note
    -- that accessing the object in the cache refreshes its timeout.
    __DataTable__ {
        name = "user",

        indexes = {
            { fields = { "id" }, primary = true },
            { fields = { "name" }, unique = true },
            { fields = { "telno" }, unique = true },
        }
    }
    __DataCacheEnable__{ timeout = 1 * 24 * 60 * 60 }
    class "User" (function(_ENV))
        __DataField__{ autoincr = true }
        property "id" { type = NaturalNumber }

        __DataField__{ notnull = true }
        property "name" { type = String }

        __DataField__{ notnull = true }
        property "telno" { type = String }
    end)

    -- Automatic caching can also be enabled for data object classes, but unlike data
    -- entity classes they must specify the data entity classes they depend on through
    -- depends. Here User is the entity class defined above, and the table value maps
    -- its fields to this class's properties, so that when data in the user table is
    -- modified, the corresponding cached data can be deleted and re-cached on the next read.
    __DataObject__{ index = { "uid" } }
    __DataCacheEnable__{ depends = { User = { id = "uid" } }, timeout = 30 * 24 * 60 * 60 }
    class "UserObject" (function(_ENV))
        property "uid" { type = NaturalNumber }

        property "name" { type = String }

        function __ctor(self, context, uid)
            local user = context.Users:Query{ id = uid }:First()
            if not user then throw("The user doesn't exist") end

            self.uid = user.id
            self.name = user.name
        end
    end)
end)

It is used in a similar way:

local uid = 3
local object = UserDataContext.UserCache():Get(uid)
local object = UserDataContext.UserCache():Get{ name = "Ann" }
local object = UserDataContext.UserObjectCache():Get(uid)

The system creates a new class (named after the original class plus the Cache suffix) for every class marked with __DataCacheEnable__. When needed, we create an object of that class and call its Get method with the primary key value; the system first queries the cache and only falls back to the database to get the corresponding data entity or data object when the cache misses.

For unique indexes, you can also query by passing the indexed values in a table, as in the second line above. Data object classes have no unique indexes, so they cannot be queried this way.

When the data in the user table is modified and the transaction ends, the system automatically deletes the cached data (both the User entity itself and the UserObject that depends on it), so that the next access fetches fresh data and caches it again. This is completely automatic and requires no developer intervention.

Note that not every data entity or data object class should be cached automatically; whether to enable caching depends on the application.

A full test example:

require "PLoop.System.Data"

PLoop(function(_ENV)
    import "System.Data"

    class "MySQLConnection" {}

    __Sealed__() class "MySQLTransaction" (function(_ENV)
        extend "IDbTransaction"

        -----------------------------------------------------------
        --                         method                        --
        -----------------------------------------------------------
        function Begin(self)
            self.Connection:Execute("TRANSACTION BEGIN")
        end

        function Commit(self)
            self.Connection:Execute("TRANSACTION COMMIT")
        end

        function Rollback(self)
            self.Connection:Execute("TRANSACTION ROLLBACK")
        end

        -----------------------------------------------------------
        --                      constructor                      --
        -----------------------------------------------------------
        __Arguments__{ MySQLConnection, TransactionIsolation/TransactionIsolation.REPEATABLE_READ }
        function __ctor(self, conn, isolation)
            self.Connection = conn
            self.Isolation  = isolation
        end
    end)

    -- Sql builder
    __Sealed__() class "MySQLBuilder" (function(_ENV)
        extend "ISqlBuilder"

        export {
            SQLTYPE_SELECT      = 1,
            SQLTYPE_UPDATE      = 2,
            SQLTYPE_DELETE      = 3,
            SQLTYPE_INSERT      = 4,

            FIELD_SQLTYPE       = 1,
            FIELD_SELECT        = 2,
            FIELD_UPDATE        = 3,
            FIELD_INSERT        = 4,
            FIELD_FROM          = 5,
            FIELD_WHERE         = 6,
            FIELD_ORDERBY       = 7,

            quote_sql_str       = function(str) return ("%q"):format(str) end,
            type                = type,
            tblconcat           = table.concat,
            tostring            = tostring,
            pairs               = pairs,

            DBNull              = System.Data.DBNull,
        }

        local function escape(val)
            if val == DBNull then return "NULL" end

            local vtype         = type(val)

            if vtype == "boolean" then
                return val and "1" or "0"
            elseif vtype == "string" then
                return quote_sql_str(val)
            else
                return tostring(val)
            end
        end

        -----------------------------------------------------------
        --                         method                        --
        -----------------------------------------------------------
        function Select(self, fields)
            self[FIELD_SQLTYPE] = SQLTYPE_SELECT

            if type(fields) == "table" then
                fields          = tblconcat(fields, ",")
            else
                fields          = tostring(fields)
            end

            self[FIELD_SELECT]  = fields ~= "" and fields or nil

            return self
        end

        function Insert(self, map)
            if type(map) == "table" then
                self[FIELD_SQLTYPE] = SQLTYPE_INSERT

                local fields    = {}
                local values    = {}
                local index     = 1

                for fld, val in pairs(map) do
                    fields[index] = fld
                    values[index] = escape(val)
                    index       = index + 1
                end

                fields          = tblconcat(fields, ",")
                values          = tblconcat(values, ",")

                if fields ~= "" then
                    self[FIELD_SELECT] = fields
                    self[FIELD_INSERT] = values
                end
            end

            return self
        end

        function Update(self, map)
            self[FIELD_SQLTYPE] = SQLTYPE_UPDATE

            if type(map) == "table" then
                local temp      = {}
                local index     = 1
                local first     = true

                for fld, val in pairs(map) do
                    if not first then
                        temp[index] = ","
                        index   = index + 1
                    end
                    first       = false

                    temp[index] = fld
                    index       = index + 1

                    temp[index] = "="
                    index       = index + 1

                    temp[index] = escape(val)
                    index       = index + 1
                end

                map             = tblconcat(temp, "")
            else
                map             = tostring(map)
            end

            self[FIELD_UPDATE]  = map ~= "" and map or nil

            return self
        end

        function Delete(self)
            self[FIELD_SQLTYPE] = SQLTYPE_DELETE

            return self
        end

        function From(self, name)
            self[FIELD_FROM]    = name
            return self
        end

        function Where(self, condition)
            if type(condition) == "table" then
                local temp      = {}
                local index     = 1
                local first     = true

                for fld, val in pairs(condition) do
                    if not first then
                        temp[index] = " AND "
                        index   = index + 1
                    end
                    first       = false

                    temp[index] = fld
                    index       = index + 1

                    temp[index] = "="
                    index       = index + 1

                    temp[index] = escape(val)
                    index       = index + 1
                end

                condition       = tblconcat(temp, "")
            else
                condition       = tostring(condition)
            end

            self[FIELD_WHERE]   = condition ~= "" and condition or nil

            return self
        end

        function OrderBy(self, field, desc)
            if desc then field  = field .. " DESC" end

            if self[FIELD_ORDERBY] then
                self[FIELD_ORDERBY] = self[FIELD_ORDERBY] .. "," .. field
            else
                self[FIELD_ORDERBY] = field
            end

            return self
        end

        function ToSql(self)
            local temp          = {}

            local sqltype       = self[FIELD_SQLTYPE]

            if not sqltype then return end

            if sqltype == SQLTYPE_SELECT then
                temp[1]         = "SELECT"
                temp[2]         = self[FIELD_SELECT] or "*"
                temp[3]         = "FROM"

                if not self[FIELD_FROM] then return end
                temp[4]         = self[FIELD_FROM]

                if self[FIELD_WHERE] then
                    temp[5]     = "WHERE"
                    temp[6]     = self[FIELD_WHERE]
                else
                    temp[5]     = ""
                    temp[6]     = ""
                end

                if self[FIELD_ORDERBY] then
                    temp[7]     = "ORDER BY"
                    temp[8]     = self[FIELD_ORDERBY]
                else
                    temp[7]     = ""
                    temp[8]     = ""
                end
            elseif sqltype == SQLTYPE_UPDATE then
                temp[1]         = "UPDATE"

                if not self[FIELD_FROM] then return end
                temp[2]         = self[FIELD_FROM]
                temp[3]         = "SET"

                if not self[FIELD_UPDATE] then return end
                temp[4]         = self[FIELD_UPDATE]

                if self[FIELD_WHERE] then
                    temp[5]     = "WHERE"
                    temp[6]     = self[FIELD_WHERE]
                else
                    temp[5]     = ""
                    temp[6]     = ""
                end
            elseif sqltype == SQLTYPE_DELETE then
                temp[1]         = "DELETE FROM"

                if not self[FIELD_FROM] then return end
                temp[2]         = self[FIELD_FROM]

                if self[FIELD_WHERE] then
                    temp[3]     = "WHERE"
                    temp[4]     = self[FIELD_WHERE]
                else
                    return
                end
            elseif sqltype == SQLTYPE_INSERT then
                temp[1]         = "INSERT INTO"

                if not self[FIELD_FROM] then return end
                temp[2]         = self[FIELD_FROM]

                if not self[FIELD_SELECT] then return end

                temp[3]         = "("
                temp[4]         = self[FIELD_SELECT]
                temp[5]         = ") VALUES ("

                if not self[FIELD_INSERT] then return end

                temp[6]         = self[FIELD_INSERT]
                temp[7]         = ")"
            end

            return tblconcat(temp, " ")
        end

        -----------------------------------------------------------
        --                      constructor                      --
        -----------------------------------------------------------
        function __new(self)
            return {
                [FIELD_SQLTYPE] = false,
                [FIELD_SELECT]  = false,
                [FIELD_UPDATE]  = false,
                [FIELD_INSERT]  = false,
                [FIELD_FROM]    = false,
                [FIELD_WHERE]   = false,
                [FIELD_ORDERBY] = false,
            }, true
        end
    end)

    __Sealed__() class "MySQLConnection" (function(_ENV)
        extend "IDbConnection"

        property "SqlBuilder" { set = false, default = MySQLBuilder }

        -----------------------------------------------------------
        --                        method                        --
        -----------------------------------------------------------
        __Arguments__{ TransactionIsolation/TransactionIsolation.REPEATABLE_READ }
        function NewTransaction(self, isolation)
            return MySQLTransaction(self, isolation)
        end

        function Close(self)
            print("[Connection][Close]")
        end

        function Open(self)
            print("[Connection][Open]")
        end

        function Query(self, sql)
            print("[SQL][Query]", sql)
        end

        function Insert(self, sql)
            print("[SQL][Insert]", sql)
        end

        function Update(self, sql)
            print("[SQL][Update]", sql)
        end

        function Delete(self, sql)
            print("[SQL][Delete]", sql)
        end

        function Execute(self, sql)
            print("[SQL][Execute]", sql)
        end
    end)

    __Sealed__() class "RedisCache" (function(_ENV)
        extend "ICache"

        -----------------------------------------------------------
        --                        method                         --
        -----------------------------------------------------------
        --- Try sets the the value with non-exist key to the cache, return true if success
        function TrySet(self, ...) print("TrySet", ...) end

        --- Sets key-value pair to the cache
        function Set(self, ...) print("Set", ...) end

        --- Sets the expire time for a key
        function SetExpireTime(self, ...) print("SetExpireTime", ...) end

        --- Gets value for a key
        function Get(self, ...) print("Get", ...) end

        --- Whether the key existed in the cache
        function Exist(self, ...) print("Exist", ...) end

        --- Delete a key from the cache
        function Delete(self, ...) print("Delete", ...) end
    end)

    -- The test 
    __DataContext__() __DataContextCache__(RedisCache)
    class "UserDataContext" (function(_ENV)
        function __ctor(self)
            self.Connection = MySQLConnection()
        end

        __DataTable__ {
            name         = "user",

            indexes      = {
                { fields = { "id" },   primary = true },
                { fields = { "name" },  unique = true },
                { fields = { "telno" },  unique = true },
            }
        }
        __DataCacheEnable__{ timeout = 1 * 24 * 60 * 60 }
        class "User" (function(_ENV)
            __DataField__{ autoincr = true }
            property "id"           { type = NaturalNumber }

            __DataField__{ notnull = true }
            property "name"         { type = String }

            __DataField__{ notnull = true }
            property "telno"        { type = String }
        end)

        __DataObject__{ index = { "uid" } }
        __DataCacheEnable__{ depends = { User = { id = "uid" } }, timeout = 30 * 24 * 60 * 60 }
        class "UserObject" (function(_ENV)
            property "uid"           { type = NaturalNumber }

            property "name"         { type = String }

            function __ctor(self, context, id)
                local user      = context.Users:Query{ id = id }:First()
                if not user then throw("The user doesn't exist") end

            self.uid        = user.id
                self.name       = user.name
            end
        end)
    end)

    -- [Connection][Open]
    with(UserDataContext())(function(ctx)
        -- [SQL][Query] SELECT id, name, telno FROM user WHERE id=1
        local e = ctx.Users:Query{ id = 1 }:First()

        -- [SQL][Execute]   TRANSACTION BEGIN
        with(ctx.Transaction)(function(tran)
            e = ctx.Users:Add{ id = 1, name = "Ann" }

            -- [SQL][Insert]    INSERT INTO user ( name ) VALUES ( "Ann" )
            ctx:SaveChanges()
        end)
        -- [SQL][Execute]  TRANSACTION COMMIT
        -- Delete   PLDC:UserDataContext.User:1
        -- Delete   PLDC:UserDataContext.User:2:Ann
        -- Delete   PLDC:UserDataContext.UserObject:1
    end)
    -- [Connection][Close]
end)

As you can see, once the transaction is committed, the system automatically issues the corresponding cache delete operations.