Complex, distributed computer-based applications today have to address two key requirements. First is data sharing with system enforcement of data integrity. Second is application development and modification by end users. Most application owners recognise that existing development methodologies simply don’t satisfy these requirements.
Systems should support a realistic model of the application (what’s to be done). And for such a model to drive a real process, a well-defined system architecture (hardware and software) and model components are essential.
Since the major purpose of any industrial enterprise is to produce things of value from other things, it can be perceived as a set of processes – actions (or functions) characterised by change over time. Taking this view, a process is a set of actions or functions that converts real things (the process input) into real things of enhanced value (the output).
Thus, an application solution can be viewed as an operational model of the processes. It’s now necessary to identify appropriate process model constructs, as well as to define an infrastructure that lets them operate while connected to the process(es). Primary components (or constructs) of a model are: a thing (or things); an action; the means to perform it; associated parameters; the action start time; and its duration.
These are the model constructs. First are material things (inputs to, and outputs from, the process; parameters; fixtures that support the process). Second are functional things (actions and functions). Third are conceptual things (time, start event, completion event). There is another distinction: the constructs are either active (functional) or passive (constructs that are acted upon).
It is now reasonable to see these as a process model where some ‘things’ (process inputs) are operated upon by an action (or function) for some period of time, generating different ‘things’ (process output) according to some prescription (parameters).
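The constructs above can be sketched as a simple data structure. This is a minimal illustration only – the class name, fields and the refinery example values are assumptions, not part of any system described here:

```python
from dataclasses import dataclass, field
from datetime import datetime, timedelta

@dataclass
class Process:
    """One process: inputs operated on by an action, for some period of
    time, producing outputs according to a prescription (parameters)."""
    inputs: list[str]              # material things entering the process
    action: str                    # the action or function performed
    means: str                     # fixture/equipment that performs it
    parameters: dict[str, float]   # prescription governing the action
    start: datetime                # conceptual thing: the start event
    duration: timedelta            # conceptual thing: time
    outputs: list[str] = field(default_factory=list)

# Hypothetical example: a crude distillation run
crude_run = Process(
    inputs=["crude oil"],
    action="distil",
    means="column C-101",
    parameters={"feed_rate_t_per_h": 120.0},
    start=datetime(2024, 1, 1, 6, 0),
    duration=timedelta(hours=8),
    outputs=["naphtha", "kerosene", "residue"],
)
print(crude_run.action, "->", crude_run.outputs)
```

The active construct (the action) and the passive constructs (inputs, outputs, parameters) are kept in one record, so a model instance maps one-to-one onto a real process step.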
The IT domain contains both material and conceptual things. Information systems are concerned with material things such as machines and computer hardware, forms, listings, tools and people. They also deal with conceptual things – ideas, plans, strategies, computer software, time.
Fortunately, there’s a crisp boundary between ‘real processes’ and the IT domain – a machine/instrumentation interface. This comprises computer hardware on the IT side, with transducers on the process side. The computer hardware supports the data and program components of the IT world. So it’s reasonable to expect an information systems model of process components to consist of modelling and operational constructs composed of data (data objects) and computer programs (function objects).
A set of computer hardware (including computers, peripherals, comms networks and operating systems) and function objects, called ‘enablers’, form the operational environment to support an IT model of an industrial process. Some of the enabler function objects manage data objects, while others manage function objects (and some manage both).
Tools for defining and configuring industrial process models, and for operating on data and function objects, must be available and supported by the information system environment. Tools for creating data object types (generic models) could also be developed. Tools for creating function objects (such as program assemblers and compilers) must be available too.
Most applications are not self-contained; they require information developed by other applications, and sometimes alter information in other areas. This sharing of data must be accomplished with support from many levels and nodes of distributed application processing. Throughout, it’s essential to maintain data integrity.
Suppose two applications are developed independently such that the data required for their operation is defined, as are the rules for data access. As the information system model is developed, Application A would be more efficient if it could have access to the data ‘owned’ by Application B. So the issue is: where to enforce data access rules?
One solution would require the owner of Application A’s data to get the co-operation of Application B’s data owner, possibly adding new rules and creating an API for access. This solution becomes less satisfactory as the number of applications that need to share data increases. The management of data access rule enforcement quickly becomes very difficult.
A solution that lets different applications share data, without developers having to directly participate in each others’ effort, involves separating application functions from the mechanism of shared data – with data integrity rules enforced by a ‘rules enforcement function’.
The shared data and rules map directly to data objects. Likewise, the rules enforcement function maps to a function object called the data object access rules enforcer (DOARE). But an application developer needs another function object for management – for scheduling function object execution, understanding object status and dealing with recovery. This we will call: the activity control function (ACF).
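A minimal sketch of the DOARE idea follows. The class and method names, and the tank-level example, are illustrative assumptions; the point is only that every access to a shared data object passes through one enforcer, so owners never build per-application APIs:

```python
class DataObject:
    """A shared data object carrying its owner-defined access rules."""
    def __init__(self, name, value, readers):
        self.name = name
        self.value = value
        self.readers = readers  # applications permitted to read this object

class DOARE:
    """Data object access rules enforcer (hypothetical sketch).
    All reads go through here; the rules travel with the data object,
    so adding a sharing application means updating one rule set."""
    def __init__(self):
        self._objects = {}

    def register(self, obj):
        self._objects[obj.name] = obj

    def read(self, app, name):
        obj = self._objects[name]
        if app not in obj.readers:
            raise PermissionError(f"{app} may not read {name}")
        return obj.value

doare = DOARE()
doare.register(DataObject("tank_level", 42.0, readers={"AppA", "AppB"}))
print(doare.read("AppA", "tank_level"))  # permitted read
```

An unregistered application (say ‘AppC’) attempting the same read would raise `PermissionError` – the enforcement lives with the data, not in either application.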
Given that generic data objects and function objects must be created, the first concern is where they should be kept. Call it: the model storage repository (MSR). We must now understand how the constructs are related, as well as understanding the data flow requirements. Then we can develop the methodology for managing industrial processes using a model-driven approach.
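The MSR can be pictured as nothing more than a keyed store of generic object definitions. This is an assumed sketch, not a prescribed design – the type names and fields are invented for illustration:

```python
class ModelStorageRepository:
    """Model storage repository (MSR) sketch: holds generic data object
    types and function object definitions, keyed by name, so models can
    be assembled from stored constructs rather than built from scratch."""
    def __init__(self):
        self._definitions = {}

    def store(self, name, definition):
        self._definitions[name] = definition

    def fetch(self, name):
        return self._definitions[name]

msr = ModelStorageRepository()
msr.store("tank", {"fields": ["level", "temperature"]})
print(msr.fetch("tank"))
```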
Function objects contained in such a logical view of a model will generally communicate using a messaging paradigm. A distribution function object handles the comms for delivering message data objects. It’s then a matter of understanding the relationships of the data and function object types for model-driven process management – including DOARE and ACF.
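The distribution function object can be sketched as a small message queue between function objects. Again the names (including ‘ACF’ as a recipient) are used purely for illustration under the messaging paradigm described above:

```python
from collections import defaultdict, deque

class Distributor:
    """Distribution function object sketch: delivers message data objects
    between function objects that never call each other directly."""
    def __init__(self):
        self._queues = defaultdict(deque)

    def send(self, recipient, message):
        # Queue the message data object for the named function object
        self._queues[recipient].append(message)

    def receive(self, recipient):
        # Deliver the oldest queued message, or None if the queue is empty
        q = self._queues[recipient]
        return q.popleft() if q else None

bus = Distributor()
bus.send("ACF", {"event": "step_complete", "step": 3})
print(bus.receive("ACF"))
```

Because function objects only ever see messages, the same model runs unchanged whether the objects share a node or are distributed across the network.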
Dick Coulter is a consultant with many years’ experience of simulation and advanced optimisation schemes, mainly in oil, gas and refining industries. You can reach him by e-mail on email@example.com