Configure Production GT.M Instance on OpenVMS

In this article, we will set up a production-grade instance of GT.M on OpenVMS/Alpha. This instance of GT.M will have journaling enabled, as well as an optimized data storage configuration. We will assume that GT.M has already been installed with the defaults, and that the GT.M logical names from GTM$DIST:GTMLOGICALS.COM are defined.

Document Conventions

User-supplied parameters will be enclosed in angle brackets, e.g. <my-parameter>. Where these appear, you will be expected to replace the bracketed parameter with the appropriate value for your system.

The bracketed parameters are listed below:

<home-device>
The OpenVMS volume containing the instance user’s home directory
<journal-device>
The OpenVMS volume containing the instance’s journal files
<data-device>
The OpenVMS volume containing the instance’s global directory and data files
<image-device>
The OpenVMS volume containing the instance’s object files (*.O;*)
<instance-user>
The username of the instance user
<instance-user-name>
The name (e.g. “Test User”, “Development User”, or “Production User”) of the instance user
<group-number>
The group number component of the instance user’s UIC
<user-number>
The user number component of the instance user’s UIC

Physical Storage

This setup will require four storage volumes, in addition to any system and data volumes already in use on your system. Each volume may actually consist of multiple devices in a RAID set. Thus, when we refer to a storage volume, it may refer to any number or configuration of physical storage devices. Storage volumes have names like DKA400: or DKC0: in OpenVMS.

Please choose four storage volumes that are otherwise unused, as this procedure will destroy all existing data on the four volumes chosen.

Home Volume

Referred to as <home-device> in the DCL examples that follow, the home volume contains the instance user’s home directory. The instance user’s home directory will contain two subdirectories, one named “R” and one named “P”. The “R” subdirectory (“ROUTINES”) will contain the source files for any apps defined on the system, and will be writable only by SYSTEM. The “P” subdirectory (“PATCHES”) will contain source code and object files for locally-generated routines and modifications. When executing MUMPS routines in this configuration, routines stored in “P” will override routines stored in “R”, when both routines have the same filename.
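
The override behavior described above can be pictured as a simple first-match search: GT.M walks the routine search path left to right and uses the first match it finds. The sketch below models this in Python purely for illustration; the directory labels and routine names are placeholders, not actual GT.M internals.

```python
# Sketch of first-match routine resolution: the search path is walked
# left to right, and the first directory containing the routine wins.
# Directory labels and routine names here are illustrative placeholders.
def resolve_routine(name, search_path):
    """Return the label of the directory whose copy of `name` wins, or None."""
    for directory, contents in search_path:
        if name in contents:
            return directory
    return None

# "P" (patches) is listed before "R" (routines), so a patched copy of
# BILLING shadows the stock one; REPORTS exists only in "R".
search_path = [
    ("P", {"BILLING"}),
    ("R", {"BILLING", "REPORTS"}),
]
print(resolve_routine("BILLING", search_path))  # P
print(resolve_routine("REPORTS", search_path))  # R
```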

When choosing a RAID configuration for the Home Volume, the configuration should favor fault tolerance over speed, and be optimized for sequential I/O.

For this phase of the procedure, and until otherwise specified, I will assume that you are logged into the SYSTEM account.

Let us begin by initializing and mounting the Home Volume, as shown below:

$ INITIALIZE <home-device>: GTMHOME
$ MOUNT/SYSTEM <home-device>: GTMHOME

Journal Volume

Referred to as <journal-device> in the DCL examples that follow, the journal volume contains the GT.M journal files for this instance. The journal files provide increased availability for the instance by facilitating recovery when power outages or other events prevent the GT.M database files from being properly quiesced (or “run down” in GT.M parlance).

When choosing a RAID configuration for the Journal Volume, the configuration should favor fault tolerance over speed, and be optimized for sequential I/O, as journal files are only appended to, or read from beginning to end.

We will now initialize and mount the Journal Volume, as shown below:

$ INITIALIZE <journal-device>: GTMJNL
$ MOUNT/SYSTEM <journal-device>: GTMJNL

Data Volume

Referred to as <data-device> in the DCL examples that follow, the data volume contains the GT.M global directory and data files for this instance. This is where GT.M will store MUMPS globals, and is arguably the most important volume of all.

When choosing a RAID configuration for the Data Volume, the configuration should balance fault tolerance and speed, and be optimized for random access I/O, as GT.M data will be accessed from unpredictable positions within the data files.

Let’s initialize and mount the Data Volume, as shown below:

$ INITIALIZE <data-device>: GTMDATA
$ MOUNT/SYSTEM <data-device>: GTMDATA

Image Volume

Referred to as <image-device> in the DCL examples that follow, the image volume contains the GT.M object files for this instance. These are the actual binaries that will be run by the GT.M environment.

When choosing a RAID configuration for the Image Volume, the configuration should favor speed over fault tolerance, as access times for the object files will directly affect your application’s load times and performance, and object files can typically be regenerated from their respective sources.

Time to initialize and mount the Image Volume, as shown below:

$ INITIALIZE <image-device>: GTMIMG
$ MOUNT/SYSTEM <image-device>: GTMIMG

Instance User and Directories

In this step, we will create the user account for the instance user, and create the necessary directories where GT.M will store its data, routines, object files, journals, and local patches.

Multiple users could be created to support multiple instances, but we will focus only on creating one user and the necessary directories. The same procedure applies for creating further instance users.

In the DCL examples that follow, please consult the table of bracketed parameters in the Document Conventions section above for definitions of <instance-user>, etc.

We will begin by running the OpenVMS Authorize utility, which maintains the user authorization file (UAF), and adding the user, as shown below:

$ SET DEF SYS$SYSTEM
$ RUN AUTHORIZE
UAF> ADD <instance-user>/PASSWORD=temp/OWNER="<instance-user-name>"/DEV=<home-device>/DIR=[<instance-user>]/UIC=[<group-number>,<user-number>]/FLAG=NODISUSER
%UAF-I-PWDLESSMIN, new password is shorter than minimum password length
%UAF-I-ADDMSG, user record successfully added
%UAF-I-RDBADDMSGU, identifier <instance-user> value [<group-number>,<user-number>] added to rights database
UAF> EXIT
%UAF-I-DONEMSG, system authorization file modified
%UAF-I-RDBDONEMSG, rights database modified

Now we will create the necessary directory structure, as shown below:

$ CREATE/DIRECTORY <home-device>:[<instance-user>]
$ CREATE/DIRECTORY <home-device>:[<instance-user>.r]
$ CREATE/DIRECTORY <home-device>:[<instance-user>.p]
$ CREATE/DIRECTORY <journal-device>:[<instance-user>]
$ CREATE/DIRECTORY <journal-device>:[<instance-user>.j]
$ CREATE/DIRECTORY <data-device>:[<instance-user>]
$ CREATE/DIRECTORY <data-device>:[<instance-user>.g]
$ CREATE/DIRECTORY <image-device>:[<instance-user>]
$ CREATE/DIRECTORY <image-device>:[<instance-user>.o]
$ CREATE/DIRECTORY <image-device>:[<instance-user>.o.50000]

Now, set the ownership on the directories:

$ SET DIRECTORY/OWNER=<instance-user> <home-device>:[<instance-user>]
$ SET DIRECTORY/OWNER=<instance-user> <home-device>:[<instance-user>.r]
$ SET DIRECTORY/OWNER=<instance-user> <home-device>:[<instance-user>.p]
$ SET DIRECTORY/OWNER=<instance-user> <journal-device>:[<instance-user>]
$ SET DIRECTORY/OWNER=<instance-user> <journal-device>:[<instance-user>.j]
$ SET DIRECTORY/OWNER=<instance-user> <data-device>:[<instance-user>]
$ SET DIRECTORY/OWNER=<instance-user> <data-device>:[<instance-user>.g]
$ SET DIRECTORY/OWNER=<instance-user> <image-device>:[<instance-user>]

Now, we will set permissions on the newly-created directories, so that only SYSTEM will be able to write or delete object files in <image-device>:[<instance-user>.o]50000.DIR, as shown below:

$ SET SECURITY /PROTECTION=(S:RWED,O:RE,G:RE,W:"") <image-device>:[<instance-user>.o]50000.DIR
$ SET SECURITY /PROTECTION=(S:RWED,O:RE,G:RE,W:"") <image-device>:[<instance-user>]O.DIR
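
The protection mask in the commands above grants each of the four OpenVMS access categories (System, Owner, Group, World) some combination of Read, Write, Execute, and Delete. As a reading aid only, here is a tiny Python sketch that unpacks such a mask; it is not part of the procedure:

```python
# Sketch of reading an OpenVMS protection mask such as
# (S:RWED,O:RE,G:RE,W:"") -- four categories (System, Owner, Group,
# World), each granted some subset of Read, Write, Execute, Delete.
def parse_protection(mask):
    """Map each category letter to its access string."""
    result = {}
    for field in mask.strip("()").split(","):
        category, _, access = field.partition(":")
        result[category] = access.strip('"')
    return result

prot = parse_protection('(S:RWED,O:RE,G:RE,W:"")')
print(prot)  # {'S': 'RWED', 'O': 'RE', 'G': 'RE', 'W': ''}
```

Read back in words: SYSTEM may do anything; the owner and group may read and execute, but not modify; the world has no access at all.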

You will next need to create <home-device>:[<instance-user>]LOGIN.COM, including the lines of DCL code shown below. These lines ensure that GTM$GBLDIR (which determines where the GT.M global directory is located) and GTM$ROUTINES (which determines the locations GT.M will search for routines and object files) are set to the correct values.

$ IF (P1 .NES. "") .AND. (F$EXTRACT(0,1,P1) .NES. "/") THEN P1 := /'P1
$ DEFINE 'P1' GTM$GBLDIR	<data-device>:[<instance-user>.g]MUMPS.GLD
$ DEFINE 'P1' GTM$ROUTINES	 "<home-device>:[<instance-user>.p],<image-device>:[<instance-user>.o.50000]/SRC=<home-device>:[<instance-user>.r],GTM$DIST:"
$ EXIT

Next, add the following lines to SYS$MANAGER:SYSTARTUP_VMS.COM to ensure that the newly-created volumes are mounted at system startup:

$ MOUNT/SYSTEM <home-device>: GTMHOME
$ MOUNT/SYSTEM <journal-device>: GTMJNL
$ MOUNT/SYSTEM <data-device>: GTMDATA
$ MOUNT/SYSTEM <image-device>: GTMIMG

Defining the Global Directory and Creating the Data File

In this section, we will use the GT.M Global Directory Editor (GDE) and the MUMPS Peripheral Interchange Program (MUPIP) to define the global directory and database file for the instance.

For this part of the procedure, you will need to be logged into the <instance-user> account created above. This is crucial.

Historical Note

The MUPIP program’s name has very deep roots. A Peripheral Interchange Program (PIP) was first used in the Digital Equipment Corporation PDP-6 series of computers in the 1960s, and later made it into TOPS-10 on the PDP-10, RSTS/E on the PDP-11, and eventually into Gary Kildall’s CP/M operating system, which is largely credited as an early and important foundation of the personal computer revolution. How it got into GT.M is a bit of trivia with which I am as yet unacquainted, but perhaps someone here can shed a little light on the subject.

So, without further ado, here are the commands used to set up your global directory:

$ RUN GTM$DIST:GDE
GDE> CHANGE /SEGMENT $DEFAULT /FILE=<data-device>:[<instance-user>.g]MUMPS.DAT /ALLOC=200000 /BLOCK_SIZE=4096 /LOCK_SPACE=1000 /EXTENSION_COUNT=0
GDE> CHANGE /REGION $DEFAULT /RECORD_SIZE=4080 /KEY_SIZE=255
GDE> EXIT

The above commands bear further explanation.

The first line is the DCL command which will launch the GT.M Global Directory Editor, and should be familiar to anyone who has a passing familiarity with OpenVMS and DCL.

The second line sets the characteristics of the $DEFAULT database segment. The /FILE switch tells GDE to use <data-device>:[<instance-user>.g]MUMPS.DAT to store the data for the segment. The /ALLOC and /BLOCK_SIZE switches instruct GDE to allocate 200,000 blocks of 4,096 bytes each to the segment. The /LOCK_SPACE switch instructs GDE to allocate 1,000 pages for the lock space, reducing the chance of exhausting lock resources under heavy load. The /EXTENSION_COUNT=0 switch disables GT.M’s ability to automatically expand the database when storage runs short. Although you can set EXTENSION_COUNT to an arbitrary number of blocks, I do not recommend this practice: filling up your data drive at the OpenVMS level can be far more catastrophic than filling up your database file, which will simply halt further writes to the database. A better solution is to employ a script that monitors database usage and notifies you when a certain threshold is reached.
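
The core of such a monitoring script is a simple threshold check, sketched below in Python. In practice, the block counts would come from GT.M’s own reporting tools on OpenVMS; the numbers and the 85% threshold here are illustrative assumptions only.

```python
# Sketch of the threshold check a database-usage monitor might perform.
# In a real deployment the used/total block counts would be obtained
# from GT.M reporting tools; these figures are illustrative only.
def usage_alert(used_blocks, total_blocks, threshold=0.85):
    """Return a warning string once usage crosses the threshold, else None."""
    ratio = used_blocks / total_blocks
    if ratio >= threshold:
        return "WARNING: database %.0f%% full" % (ratio * 100)
    return None

print(usage_alert(150000, 200000))  # 75% full: None (no alert yet)
print(usage_alert(180000, 200000))  # 90% full: warning issued
```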

It is worth noting that you can calculate your database size by multiplying /ALLOC by /BLOCK_SIZE. In this case, the database will be just over 781 MB (200,000 blocks × 4,096 bytes per block = 819,200,000 bytes).
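
As a quick sanity check, the arithmetic works out as follows:

```python
# Database size = allocation (blocks) * block size (bytes per block).
blocks = 200_000
block_size = 4_096
total_bytes = blocks * block_size
print(total_bytes)                  # 819200000 bytes
print(total_bytes / (1024 * 1024))  # 781.25 MB
```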

Next, we will set up journaling using the MUPIP program.

Journaling and MUPIP

The following commands will enable journaling to <journal-device>:[<instance-user>.j]<instance-user>.MJL:

$ RUN GTM$DIST:MUPIP
MUPIP> CREATE
Database file for region $DEFAULT created.
$ RUN GTM$DIST:MUPIP
MUPIP> SET /REGION $DEFAULT /JOURNAL=(ENABLE,ON,BEFORE,FILENAME=<journal-device>:[<instance-user>.j]<instance-user>.MJL)
%GTM-I-JNLCREATE, Journal file <journal-device>:[<instance-user>.j]<instance-user>.MJL created for region $DEFAULT
 with BEFORE_IMAGES
%GTM-I-JNLSTATE, Journaling state for region $DEFAULT is now ON

CREATE tells MUPIP to create the .DAT file as specified by the global directory.

The command containing SET /REGION tells MUPIP to enable journaling for region $DEFAULT. ENABLE tells MUPIP that the specified region is ready to be journaled. ON tells MUPIP to create a new journal file (as specified by FILENAME) and begin using the newly-created file to record future journal entries. BEFORE instructs GT.M’s journaling system to record the contents of data blocks before modifying them (“before-images”), which enables backward recovery on the specified region.
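
The before-image idea itself is easy to model. The toy Python sketch below records each block’s prior contents before a write, so an interrupted update can be undone by replaying the journal in reverse; it illustrates the concept only, and GT.M’s actual on-disk journal format is far more involved.

```python
# Toy model of BEFORE-image journaling: the old contents of each block
# are saved before modification, so rolling back means restoring the
# saved images in reverse order. Conceptual illustration only.
class Database:
    def __init__(self, blocks):
        self.blocks = dict(blocks)
        self.journal = []  # list of (block_id, before_image)

    def write(self, block_id, new_value):
        # Record the before-image, then apply the update.
        self.journal.append((block_id, self.blocks.get(block_id)))
        self.blocks[block_id] = new_value

    def rollback(self):
        # Undo all journaled writes, newest first.
        while self.journal:
            block_id, before = self.journal.pop()
            self.blocks[block_id] = before

db = Database({1: "A", 2: "B"})
db.write(1, "A'")
db.write(2, "B'")
db.rollback()
print(db.blocks)  # {1: 'A', 2: 'B'} -- original contents restored
```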

Now that the database is being journaled, we need only set the ownership and permissions on MUMPS.DAT and MUMPS.GLD to prevent unauthorized access. This procedure is detailed in the DCL example below:

$ SET FILE/OWNER=<instance-user> <data-device>:[<instance-user>.g]MUMPS.GLD
$ SET FILE/OWNER=<instance-user> <data-device>:[<instance-user>.g]MUMPS.DAT
$ SET SECURITY /PROTECTION=(S:RWED,O:RWE,G:RWE,W:"") <data-device>:[<instance-user>.g]MUMPS.GLD
$ SET SECURITY /PROTECTION=(S:RWED,O:RWE,G:RWE,W:"") <data-device>:[<instance-user>.g]MUMPS.DAT

The instance is now created with journaling and security protections in place. You can now install any local MUMPS applications’ routines into <home-device>:[<instance-user>.r].

External Links

GT.M Administration and Operations Guide for OpenVMS