Event Handling in FreeM

FreeM implements synchronous event handling as defined in ANSI X11.6 (MWAPI) and asynchronous event handling as proposed in MDC Type A extension proposal X11/1998-28, with several significant vendor-specific extensions. Though the M Development Committee’s use of the terms “synchronous” and “asynchronous” is technically correct, the way the MWAPI and X11/1998-28 event handling models use the terms may seem somewhat unusual or foreign to those accustomed to event handling in World Wide Web technologies such as JavaScript. The remainder of this article explores the X11/1998-28 and MWAPI event handling models in some depth, as well as the architecture with which FreeM implements and extends them.

Synchronous Events

In M parlance, a synchronous event is one originating from a graphical user interface defined in the M Windowing API (MWAPI). To begin accepting and processing synchronous events, normal, procedural M code must execute the ESTART command, which implicitly enters an event processing loop. ESTART will block the flow of M code execution on the code path in which ESTART was invoked: M code immediately following ESTART will not execute until a synchronous event handler subroutine outside the primary code path of the application calls ESTOP to stop the implicit event processing loop.

Synchronous event handlers are typically registered in the ^$WINDOW structured system variable. The following code will create the window with ID myWindow, register CLOSE^WINDOW as the event handler for the CLOSE event class on window ID myWindow–called when the close gadget is pressed or the window is closed by other means. It will then begin the implicit synchronous event processing loop:

  SET W("EVENT","CLOSE")="CLOSE^WINDOW" ; Create a window definition
  MERGE ^$WINDOW("myWindow")=W ; After this MERGE, the window will appear
  ESTART ; This enters the implicit event processing loop
  QUIT ; Will not execute until CLOSE^WINDOW calls ESTOP
  ;
  ; Meanwhile, in routine WINDOW:
CLOSE ; Handler for the CLOSE event class on myWindow
  ESTOP ; Stop synchronous event processing
  QUIT

Other metadata about the CLOSE event, including the window ID and window type (among others), would be supplied to CLOSE^WINDOW through nodes of the ^$EVENT structured system variable, which is implicitly NEWed prior to the relevant nodes being populated and CLOSE^WINDOW being invoked.

In FreeM, the ESTART event processing loop for the above code sample takes the following steps:

  1. Check if ESTOP has been called. If so, exit the event processing loop and proceed to the next command following ESTART.
  2. Wait for GTK to have a user interface event in its own internal queue. During this step, the FreeM thread in which the event loop runs goes to sleep.
  3. At this point, the user closes window myWindow.
  4. Check the received GTK event information against FreeM’s table of windows, and see if a CLOSE event handler has been registered for this window ID (myWindow).
    1. If so, implicitly execute NEW ^$EVENT, populate it with metadata about the event and window from which the event originated, and then execute the M subroutine specified at ^$WINDOW("myWindow","EVENT","CLOSE"). When that subroutine (in this case, CLOSE^WINDOW) exits, return to the top of the event processing loop (step 1).
    2. If not, ignore the event and return to step 1. In the above case, this does not apply, as CLOSE^WINDOW was defined as an event handler for event class CLOSE on window ID myWindow.
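The steps above can be condensed into a short sketch. The following Python model is purely illustrative (the handler table, the event dictionaries, and the wait_for_event callback are assumptions standing in for FreeM's internal C structures and GTK's event wait, not FreeM source):

```python
# Minimal, illustrative model of FreeM's ESTART loop (not actual FreeM source).

def estart(windows, wait_for_event):
    """Run the implicit synchronous event loop.

    `windows` maps window IDs to {"EVENT": {evclass: handler}} dicts,
    mirroring the ^$WINDOW SSV; `wait_for_event` blocks until the
    windowing system (GTK, in FreeM's case) delivers an event.
    """
    state = {"estop": False}

    def estop():
        state["estop"] = True

    while not state["estop"]:                    # step 1: has ESTOP been called?
        event = wait_for_event()                 # step 2: sleep until an event arrives
        win = windows.get(event["window"])
        handler = win["EVENT"].get(event["class"]) if win else None
        if handler:                              # step 4.1: a handler is registered
            handler(dict(event), estop)          # implicit NEW ^$EVENT + metadata
        # step 4.2: otherwise ignore the event and loop

# Usage: a CLOSE handler that stops the loop, like CLOSE^WINDOW calling ESTOP.
events = iter([{"window": "myWindow", "class": "CLOSE"}])
log = []

def close_handler(ev, estop):
    log.append(ev["class"])
    estop()

estart({"myWindow": {"EVENT": {"CLOSE": close_handler}}}, lambda: next(events))
```

Once the handler calls estop(), the loop's next iteration exits, and control returns to the command following ESTART, just as in the M sample above.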

The above example illustrates how, from the perspective of ESTART, this type of event processing is indeed synchronous. However, while ESTART is in control, user interface events are still processed asynchronously by the underlying windowing system. This can be confusing, as MWAPI events ride the wire between low-level and high-level concepts, requiring the developer to be at least somewhat familiar with both.

MWAPI–and therefore synchronous events in M–preceded the development of the asynchronous events specification, and unlike asynchronous events, is codified in published and existing MDC standards: specifically, ANSI X11.6.

Asynchronous Events

From the perspective of the M Development Committee, asynchronous event processing exists only as a Type A Extension–specifically, extension X11/1998-28. This extension was proposed by Arthur B. Smith in September 1996 and elevated to Type A extension status–as document X11/SC15/1998-6–in June 1998 at an MDC meeting in Boston, MA. As of this writing, FreeM is the only implementation known to have implemented any part of this proposal.

Event Classes

Each asynchronous event is broadly categorized into an event class, referred to as an evclass in relevant standards. FreeM event classes are as follows:

  Event Class             Description
  -----------             -----------
  COMM                    Allows application code to respond to communications events
  HALT                    Allows applications to handle HALT events
  IPC                     Supports inter-process communication
  INTERRUPT               Allows applications to respond to operating system interrupt signals
  POWER                   Intended to allow applications to respond to imminent power failure messages from uninterruptible power supplies
  TIMER                   Supports the asynchronous execution of an M subroutine after a specified time has elapsed
  TRIGGER (non-standard)  Allows an M subroutine to run when data in an M global is accessed, changed, or deleted
  USER                    Designed to support user-defined events
  WAPI                    Reserved for MWAPI events (MWAPI only supports synchronous event processing at the time of writing)

FreeM Event Classes

Event Identifiers

Beyond the event class, events are further categorized into specific event identifiers, referred to in relevant standards as evids. Event identifiers act as a sort of sub-type within a particular event class: a specific event is therefore identified by the pairing of its event class and its event identifier.

In short, event classes indicate broad categories of events, while event identifiers indicate specific types of events within an event class.

Registering Asynchronous Event Handlers

Registering an event handler is the mechanism by which the M programmer associates an event class and event identifier with an M subroutine that the M implementation will execute when that event occurs. For example, if we wanted to run the RESIZE^ASNCDEMO M routine any time the user’s terminal window was resized, we’d want to handle an event with event class INTERRUPT, and event identifier SIGWINCH. The following code will associate the above event class and identifier with the RESIZE^ASNCDEMO subroutine:

  ASNCDEMO ;
    SET ^$JOB($JOB,"EVENT","INTERRUPT","SIGWINCH")="RESIZE^ASNCDEMO"
    QUIT
    ;
  RESIZE ; Invoked asynchronously when SIGWINCH is received
    WRITE "The terminal was resized!",!
    QUIT

Much like synchronous events, metadata about asynchronous events–if any such metadata exists–is populated in the ^$EVENT structured system variable. As an explanation of all possible subscripts and values of ^$EVENT is far beyond the scope of this article, you are encouraged to consult your M vendor’s documentation for more information. As of this writing, that would mean consulting the FreeM manual: no other known M implementation has yet implemented this Type A extension.

Starting and Stopping Asynchronous Event Processing

Though the action of the code above will associate an M subroutine with an event class and identifier, this alone will not cause the M implementation to begin processing asynchronous events. Much like ESTART begins processing synchronous events, ASTART must be run before asynchronous event processing can occur. The ASTART command looks like this:

ASTART:postcondition [[evclass,...] | [(evclass,...)]]

As is typical with M commands, ASTART supports argumentless, inclusive, and exclusive forms. In its argumentless form, ASTART will begin asynchronous event processing for all event classes. In its inclusive form, ASTART will begin asynchronous event processing for only the specified event classes. Finally, the exclusive form of ASTART begins asynchronous event processing for all event classes except those specified.
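The three forms can be modeled in a few lines. This Python sketch is an illustrative model of the set arithmetic involved, not FreeM source; the function and variable names are assumptions:

```python
# Illustrative model of ASTART's argumentless, inclusive, and exclusive forms.

ALL_CLASSES = {"COMM", "HALT", "IPC", "INTERRUPT", "POWER",
               "TIMER", "TRIGGER", "USER", "WAPI"}

def astart(enabled, classes=None, exclusive=False):
    """Return the updated set of event classes with processing enabled."""
    if classes is None:                      # argumentless: enable everything
        enabled |= ALL_CLASSES
    elif exclusive:                          # exclusive: all except those listed
        enabled |= ALL_CLASSES - set(classes)
    else:                                    # inclusive: only those listed
        enabled |= set(classes)
    return enabled

enabled = astart(set(), ["INTERRUPT"])       # models: ASTART "INTERRUPT"
# enabled now contains only INTERRUPT
```

The same set arithmetic applies, by symmetry, to the inclusive and exclusive forms of ABLOCK and AUNBLOCK described later.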

Let’s further flesh out our ASNCDEMO routine to enable asynchronous event processing for the INTERRUPT event class:

  ASNCDEMO ;
    SET ^$JOB($JOB,"EVENT","INTERRUPT","SIGWINCH")="RESIZE^ASNCDEMO"
    ASTART "INTERRUPT"
    QUIT
    ;
  RESIZE ; Invoked asynchronously when SIGWINCH is received
    WRITE "The terminal was resized!",!
    QUIT

While the above code will definitely enable asynchronous event processing for INTERRUPT events, the user would never see any output from the event handler, as the program would quit prior to any event occurring: unlike ESTART for synchronous events, ASTART is always non-blocking. Therefore, in the above example, ASTART “INTERRUPT” will enable asynchronous event processing for INTERRUPT events and return immediately. As the next command in the routine is QUIT, the routine will immediately exit. The non-blocking nature of ASTART is a primary reason why asynchronous events in M are so named: they do not block the primary code path or enter an implicit event loop.

Due to the non-blocking nature of ASTART, asynchronous event processing in M probably makes the most sense for applications that provide their own loop: for instance, an application that displays a menu, accepts a selection, performs processing, and then re-displays its menu, or, an application that runs in an I/O loop gathering data, processing it, and storing results.
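The non-blocking behavior can be illustrated with a small sketch. Here a timer thread stands in for an asynchronous event source, and the polling loop stands in for an application-provided menu or I/O loop; all names are illustrative, not FreeM internals:

```python
# Illustrative contrast between ESTART (blocking) and ASTART (non-blocking).
import threading
import time

resized = threading.Event()

def handler():
    resized.set()                 # stands in for RESIZE^ASNCDEMO running

# ASTART-style: arm the event source and return immediately; the main
# code path keeps executing rather than entering an implicit event loop.
threading.Timer(0.05, handler).start()

polls = 0
while not resized.is_set():       # the application's own loop
    time.sleep(0.01)
    polls += 1
# The "handler" ran without ever blocking the primary code path -- which is
# exactly why a program with no loop of its own would QUIT before seeing it.
```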

Blocking and Unblocking Asynchronous Events

Each asynchronous event class is paired with its own event block counter, a simple integer. When the counter for a class is nonzero, events of that class are not delivered to their handlers; instead, they are queued for later processing. This mechanism is implicitly employed on invocation of an event handler subroutine: prior to entering the handler, the block counters for all event classes are incremented by one, ensuring that the execution of one event handler can never be interrupted by the execution of another. Like M’s incremental LOCK, event blocking nests: a handler for an event class will execute only once that class’s block counter has returned to zero.

Event blocking and unblocking can also be performed manually via the ABLOCK and AUNBLOCK commands, whose syntax is as follows:

ABLOCK:postcondition [[evclass,...] | [(evclass,...)]]
AUNBLOCK:postcondition [[evclass,...] | [(evclass,...)]]

In their argumentless forms, ABLOCK and AUNBLOCK will increment or decrement the event block counters for all event classes. In their inclusive forms, they will increment or decrement the event block counters for only the specified event classes. In their exclusive forms, they will increment or decrement the event block counters for all event classes except those listed.
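The block-counter semantics can be modeled in a few lines. The following Python sketch is an illustrative model of the behavior described above, not FreeM source; the class and method names are assumptions:

```python
# Illustrative model of per-class event block counters and deferred delivery.
from collections import defaultdict

class EventBlocks:
    def __init__(self):
        self.counters = defaultdict(int)   # evclass -> block counter
        self.queue = []                    # events deferred while blocked
        self.delivered = []                # events whose handlers have run

    def ablock(self, evclass):
        self.counters[evclass] += 1        # incremental, like LOCK

    def aunblock(self, evclass):
        self.counters[evclass] -= 1
        self.drain()                       # unblocking may release queued events

    def deliver(self, evclass, event):
        if self.counters[evclass] > 0:     # nonzero: queue for later processing
            self.queue.append((evclass, event))
        else:
            self.delivered.append(event)   # stand-in for running the handler

    def drain(self):
        pending, self.queue = self.queue, []
        for evclass, event in pending:
            self.deliver(evclass, event)

eb = EventBlocks()
eb.ablock("TIMER")
eb.deliver("TIMER", "tick-1")   # queued: TIMER is blocked
eb.aunblock("TIMER")            # counter back to zero; the queue drains
```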

Remember earlier, when we mentioned that an argumentless ABLOCK is implicitly executed prior to entering an event handler subroutine, in order to prevent asynchronous event handlers from interrupting each other? Although not a feature for either the faint of heart or those without exceptionally sharp minds for writing reentrant code, it is possible (though not generally recommended) to AUNBLOCK one or more event classes inside an event handler to enable such reentrant behavior. The pitfalls and risks to the logical integrity of M globals are so great that you should do so only with an abundance of caution and prodigious, careful use of LOCK around global variable accesses in such event handlers: here there be dragons!

FreeM Extension: System-Wide Asynchronous Events

In FreeM, the X11/1998-28 extension has been extended to support events that will be recognized in all FreeM processes on the system, rather than being limited to the current process only. The only difference is in the registering of event handlers: rather than registering handlers in ^$JOB($JOB,"EVENT",…), system-wide event handlers are registered in ^$SYSTEM("EVENT",…).

FreeM Asynchronous Event Handling Architecture

FreeM employs a single event queue for asynchronous events, shared across all event classes. An external signal from the operating system interrupts the flow of FreeM’s C code and invokes an internal callback, which enqueues the event–along with its event class, event identifier, and metadata–and then allows interpreter execution to resume. If the event is not important enough to interrupt the interpreter immediately, the queue is checked and handlers are run after the current M command completes. If the event is critical, FreeM raises the error condition ZASYNC. At the next checkpoint where FreeM checks for an error condition, its internal error handler is invoked, and when $ECODE is ZASYNC, FreeM immediately drains the event queue and executes all pending event handlers before resuming normal program execution.
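The architecture just described can be sketched as follows. This is an illustrative Python model of the shared queue and the two delivery paths; the function names and the urgent flag are assumptions, not FreeM internals:

```python
# Illustrative model of FreeM's shared asynchronous event queue.
from collections import deque

event_queue = deque()
handled = []

def enqueue(evclass, evid, metadata, urgent=False):
    """Called from a signal-handler callback: record the event and return,
    letting interpreter execution resume."""
    event_queue.append((evclass, evid, metadata))
    return "ZASYNC" if urgent else None   # critical events raise an error condition

def checkpoint():
    """Run after each M command completes (or from the ZASYNC error
    handler): drain the queue and dispatch every pending event."""
    while event_queue:
        evclass, evid, metadata = event_queue.popleft()
        handled.append((evclass, evid))   # stand-in for running the M handler

ecode = enqueue("INTERRUPT", "SIGWINCH", {})   # ordinary event: no error raised
checkpoint()
```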

CI/CD for FreeM on Real UNIX

FreeM is a highly portable M implementation. As of this writing, it builds and runs on the following systems:

  • GNU+Linux (Ubuntu, Debian, Slackware, OpenSUSE, Raspbian) on i386, armv6l, armv7l, aarch64, x86-64, and s390x
  • Sun/Oracle Solaris 10 and 11 on i86pc and sparc64
  • HP Tru64 UNIX (a.k.a. Digital UNIX, a.k.a. OSF/1) on alpha
  • SCO OpenServer 5.0.7 on i386
  • IBM AIX 5L 5.1 on ppc
  • GNU HURD 0.9 on i386
  • NetBSD/amd64
  • OpenBSD/amd64
  • FreeBSD/amd64
  • Mac OS X on amd64

As the current FreeM maintainer, and an avid retrocomputing enthusiast, I am committed to supporting all of these–and more–permanently. However, being a single developer, building and testing each of these architecture/OS combinations for each increment of the FreeM codebase would be a hugely difficult task if done manually. CI/CD platforms (like GitLab CI, Jenkins, and Rally) have no build agent support for many of these systems, and even getting SSH working can be a real challenge–and when you do, you may not have the ability to support the most modern encryption protocols.

Yet, a solution was needed. I would have to develop such a solution myself.

I began investigating the problem early on in my stewardship of the FreeM codebase, and decided that I needed to find out the lowest common denominator of automation, networking, and scripting capabilities all of these systems could support. This is what I arrived upon:

  • TCP/IP (using IPv4) is universally available
  • All of them have some support for cron
  • NFS v2 or greater, though NFS v3 is spotty and NFS v4 is rare
  • Vanilla Bourne shell (some variant of ksh is also relatively common, but I saw no reason to dig into its specifics, as all of the ksh variants will support vanilla Bourne shell constructs if you’re careful)

FreeM is developed from a locally-hosted GitLab git repository. It became obvious early on that doing a git pull as a core mechanic from each build host of my CI solution would not be feasible, as the git software has extensive prerequisites that many old UNIX systems are incapable of providing.

A central file server, using NFS v2, exports a filesystem for use by all build farm hosts. It contains a master list of build farm hosts in a file called servers.list, each line of which contains the short hostname (equivalent to hostname -s) of one build farm host. The filesystem also has a subdirectory corresponding to each of the build farm hosts, where the code of FreeM will be deposited and worked on. Each build farm host mounts this filesystem at /var/nas/freem-build.

There are a number of files corresponding to the current build status on each host (success, failure, running), of which only one will ever exist concurrently. Each host also has a log file containing the output of the most recent build attempt, and potentially a file to indicate that a build has been requested on that host.

I developed a series of Bourne shell scripts:

  • frm-bldall will request a build on all build hosts in servers.list by creating empty files with a certain naming convention in /var/nas/freem-build
  • frm-bldlog will display the latest build log output for a particular build host
  • frm-bldstat will display the build status (success, failure, running) for a particular build host
  • frm-build will attempt a configure and make on the current host
  • frm-chkbuild will check for the existence of a "build requested" file for the current build host in /var/nas/freem-build, and run frm-build if it exists (run from each host's root user crontab every five minutes)
  • frm-cloneall will git clone the FreeM code repository for all build farm hosts (run from a post-commit hook on the GitLab server when a tagged release is pushed)
  • frm-commit will make sure FreeM can be successfully built on the local machine, and if so, will update the semantic version info in the appropriate files, update the change log, prepare a tagged commit, and push it to GitLab, which will run frm-cloneall and frm-bldall in its post-commit hook
  • frm-reclone re-clones the source repository for a requested build host (will not run if the requested build host is currently running a build)
  • frm-reqbuild requests a new build from a specific build host
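The polling half of this system–frm-chkbuild plus the status-file protocol–can be sketched as follows. The real scripts are Bourne shell; this Python stand-in is illustrative only, and the exact file-naming convention (host.request, host.success, and so on) is an assumption:

```python
# Illustrative model of frm-chkbuild's logic over the shared NFS filesystem.
import os

NAS = "/var/nas/freem-build"   # the shared NFS mount

def chkbuild(host, run_build, nas=NAS):
    """If a build-request file exists for this host, run a build and leave
    exactly one status file (success/failure/running) behind."""
    request = os.path.join(nas, host + ".request")
    if not os.path.exists(request):
        return None                         # nothing requested; cron job exits
    os.remove(request)
    for status in ("success", "failure", "running"):
        stale = os.path.join(nas, host + "." + status)
        if os.path.exists(stale):
            os.remove(stale)                # only one status file may exist
    open(os.path.join(nas, host + ".running"), "w").close()
    ok = run_build()                        # stands in for configure && make
    os.remove(os.path.join(nas, host + ".running"))
    result = "success" if ok else "failure"
    open(os.path.join(nas, host + "." + result), "w").close()
    return result
```

Because every host polls the same exported filesystem, the only coordination primitive needed is the existence or absence of small files–something even the oldest systems in the farm can manage.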

The various elements generated by this CI system are also used to populate the build status page on the FreeM website.

The system, while already quite useful, has a number of glitches still to be ironed out:

  • Since all the build hosts run the build as root, there are lingering permissions issues. In a future release, there will be a dedicated user account for the CI system with matching UIDs on each system.
  • There are occasional race conditions.

Eventually, I will enhance the system to be more generic (supporting projects other than FreeM), and also extend it to generate native binary packages for each supported platform.

In spite of GNU+Linux dominance, I am committed to supporting 1990s-style portability across all UNIX systems, and I hope that these tools will eventually enable others to do the same for their own projects.

I Am The Anti-Web: Part 1

This multi-part series will explore the reasons why the modern World Wide Web and its ill-designed suite of languages, protocols, and ecosystem are the single most harmful development in the entire history of computing. Within it, we will make every effort to bring down its technologies, its proponents, and the false economies it has engendered. No effort will be wasted on attempting to justify it, nor to show charity to those involved.


My desktop computer has the following specs:

  • (2) Intel Xeon X5680 6-core, 12-thread processors at 3.33GHz
  • 48GB of PC2100 DDR ECC RAM
  • NVIDIA GeForce GTX-1080 Founders Edition GPU
  • (2) 240GB 6g/s SATA SSDs in RAID0 (OS and apps)
  • (4) 2TB 10,000RPM 6g/s SATA HDDs (data)
  • Debian GNU/Linux 10, running the latest proprietary NVIDIA graphics drivers
  • Windows 7 Professional is available by way of a dual-boot configuration, though this is very rarely employed

The desktop application for Slack, the popular messaging and collaboration platform, takes 13.35 seconds on my machine to switch workspaces and render its basic view. It should also be noted that I have a 400Mbit/sec Internet connection here, and my workstation connects to the core switch by way of a pair of gigabit Ethernet cables in an LACP bond.

The reason for this is that the Slack desktop application is not a native application at all. It is a JavaScript and HTML 5 application that targets the Electron framework, which allows web developers to produce desktop-style applications that run on Windows, macOS, and Linux. Discord and Skype are also built upon the same technology, which bundles the Chromium browser and its V8 JavaScript environment into application packages, and allows JavaScript to access underlying operating system services.

Evil corporations love this technology, as the proliferation of code monkeys adept at copying and pasting from W3Schools and Stack Overflow makes labor cheap (at least on the surface–technical debt from this generation of irresponsible “developers” is likely to be higher than anything we’ve ever seen), and they can target all three major platforms from a single codebase. With a sufficiently large army of marketing drones and a lack of alternatives, these companies have brainwashed their users into believing that an application which displays a spinning progress indicator for more than ten seconds, just to render its basic view, is an acceptable user experience.

Look! We can chase our own tails for 13.35 seconds!

My first computer, having a 4.77MHz 8088 CPU and 512KB of RAM, could repaginate an entire WordStar document, or recalc an entire Lotus 1-2-3 spreadsheet, in this much time or less, and the basic shell of the application views was rendered in sub-second timeframes. A modern native application (one written in a real programming language, using real operating system APIs) demonstrates the same level of performance, even with all the flashy UI chrome and graphics.

In the early to mid 1990s, developers attempting to use Visual Basic for commercial applications were ridiculed and told to go learn C++ or even Pascal, because VB (until version 5) was a threaded p-code implementation, rather than a true compiled language, and performance thus suffered. But, even the worst-performing Visual Basic application can render its views much, much faster than any Electron application, while running on a 16MHz 386SX with no FPU!

Woke AF

I suppose that the culture of the day is to blame, as the majority of modern web “developers” are crunchy hipster trend-slaves, sitting in front of their MacBooks at Starbucks, sipping on their half-caf no-whip skinny kombucha soy abominations and repeating argumentum ad populum to themselves until they believe that everything that’s new must be true, while changing technology stacks faster than Taylor Swift changes boyfriends.

Got a long list of ex-frameworks, they’ll tell you I’m insane…

Much of this is just bad economics: the Silicon Valley modus operandi is to come up with an idea (synergizing), beat the soul out of it in focus groups (market research), get Vulture Capitalist funding (where a simple equity position means “we’ll take the equity, you assume the position”), release the most minimally-functional, poor-performing pile of slop you can (rapid iteration), sell it to a greedy and evil Fortune 500 (here’s your millions, now, give us your soul), take your money, and go do something else. There is no desire in this shitfest shark-tank of capitalism run amok to actually build good products or lasting developer institutions. It’s a one-night stand, wham-bam-thank-you-ma’am, entrepreneurial trainwreck.

And, the developers aren’t even leaving bus fare on the nightstand for their hapless users.

We must do better.

Prefiniti: Architecture and Updates

The old Prefiniti codebase (WebWare.CL and Prefiniti 1.0/1.5/1.6) was bleeding-edge at the time of its original implementation (circa 2007-2009), as it used a technique called AJAX (Asynchronous JavaScript and XML), which allowed all navigation operations within the site to load only the parts of the page that needed to change.

Essentially, Prefiniti implemented what today would be called a “container/fragment” approach, where a single container page’s DOM contains “div” elements with a specific ID attribute into which “fragment” pages would be loaded. In the case of Prefiniti, the container pages were called webwareBase.cfm, appBase.cfm, Prefiniti-Steel-1024×768.cfm, or prefiniti_framework_base.cfm (depending on which Prefiniti version we’re discussing). What all of these container pages have in common is a pair of HTML div elements called sbTarget and tcTarget, which stand for “sidebar target” and “time collection target”, respectively. sbTarget is normally a left-hand navigation sidebar containing an accordion control, while tcTarget is the main element to which application content is loaded and rendered. It is so named because the time collection component of Prefiniti was the first to use AJAX techniques.

There is a utility function written in JavaScript, called AjaxLoadPageToDiv(), which takes as arguments the ID attribute of a DOM element and a URL to be loaded into and rendered within that DOM element. If the DOM element was tcTarget, AjaxLoadPageToDiv() would look within the loaded document for XML tags wwafcomponent, wwafsidebar, wwafdefinesmap, wwafpackage, and wwaficon. These tags (where wwaf stands for WebWare Application Framework) would determine the component name, contextual sidebar, package name, and icon of the content being loaded, and trigger a recursive load of the appropriate sidebar fragment into sbTarget.
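The fragment-scanning step can be sketched as follows. The real AjaxLoadPageToDiv() is client-side JavaScript; this Python model is illustrative only, and the regex-based tag scan and the sample file names are assumptions:

```python
# Illustrative model of AjaxLoadPageToDiv()'s wwaf-tag scan and the
# recursive sidebar load into sbTarget.
import re

def scan_wwaf_tags(document):
    """Extract the contents of any wwaf* tags from a loaded fragment."""
    tags = {}
    for name in ("wwafcomponent", "wwafsidebar", "wwafpackage", "wwaficon"):
        m = re.search(rf"<{name}>(.*?)</{name}>", document, re.S)
        if m:
            tags[name] = m.group(1).strip()
    return tags

loaded = {}   # div ID -> rendered fragment content

def ajax_load_page_to_div(div_id, url, fetch):
    loaded[div_id] = fetch(url)
    if div_id == "tcTarget":
        tags = scan_wwaf_tags(loaded[div_id])
        if "wwafsidebar" in tags:   # recursive load of the contextual sidebar
            ajax_load_page_to_div("sbTarget", tags["wwafsidebar"], fetch)

# Hypothetical fragments, standing in for ColdFusion templates on the server.
pages = {"tc.cfm": "<wwafsidebar>tc_sidebar.cfm</wwafsidebar>Time collection UI",
         "tc_sidebar.cfm": "sidebar markup"}
ajax_load_page_to_div("tcTarget", "tc.cfm", pages.__getitem__)
```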

The difficulty with this approach arose from the legacy of the application: the direct predecessor of WebWare.CL/Prefiniti was a simple order form for customers to order land surveys from a local surveying firm, Center Line Services. This original application did not use AJAX at all, and employed some legacy techniques in its use of server-side rendering, which I’ll explain here:

Prefiniti is implemented in a programming language and application server known as ColdFusion. Upon receiving an HTTP request for a ColdFusion template, which is denoted by a .cfm file extension, ColdFusion looks in the current directory for a file called Application.cfm, which it will run and render prior to the requested template. Application.cfm’s job is to set up session variables, application timeouts, cookies, etc. for things like user authentication and maintaining application state. If Application.cfm is not found in the same directory as the requested template, ColdFusion will traverse all parent directories up to the site’s document root until it finds one. Once Application.cfm is run and rendered, ColdFusion will run and render the template that was requested, and then look for OnRequestEnd.cfm (using the same directory traversal rules as used by Application.cfm), and run and render it.
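The traversal rule can be sketched as follows. This Python model of the lookup is illustrative only (the actual mechanism is internal to the ColdFusion server); the paths and the exists predicate are assumptions:

```python
# Illustrative model of ColdFusion's Application.cfm lookup: walk from the
# requested template's directory up to the document root, first match wins.
import os.path

def find_application_cfm(template_path, docroot, exists):
    """Return the path of the Application.cfm governing template_path, or
    None. `exists` is a predicate, so the sketch needs no real files."""
    d = os.path.dirname(template_path)
    while True:
        candidate = os.path.join(d, "Application.cfm")
        if exists(candidate):
            return candidate
        if os.path.normpath(d) == os.path.normpath(docroot):
            return None
        d = os.path.dirname(d)

# A fragment in /site/tc with an empty Application.cfm in its own directory
# "shadows" the root one -- exactly the trick the old Prefiniti code used.
files = {"/site/Application.cfm", "/site/tc/Application.cfm"}
governing = find_application_cfm("/site/tc/form.cfm", "/site", files.__contains__)
```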

This is not a bad technique, except that the original application on which WebWare.CL/Prefiniti was based used Application.cfm to render DOCTYPE, html, head, and body elements, along with a site header, navigation menubar, and a toolbar, and OnRequestEnd.cfm would close these tags, while any requested template would fill in the rest of the page body as appropriate.

The problem with this manifested when AjaxLoadPageToDiv() would request a fragment to be loaded into tcTarget or sbTarget, the fragment also being a ColdFusion template. Application.cfm would be processed in the normal way, and the header, navbar, and toolbar–which were only supposed to exist at the top of the page, above the sbTarget and tcTarget div elements–would be repeated within both sbTarget and tcTarget.

At this point in the application’s development, Application.cfm had grown tremendously complex, and I, as a relatively green CF developer, couldn’t figure out how to move the visual content out of it and into the container template (webwareBase.cfm et al.) in order to fix the problem correctly. My solution at the time was to place fragments into subdirectories (tc, workFlow, socialnet, businessnet, etc.) of the document root, each subdirectory containing an empty Application.cfm file, to prevent rendering of the parent Application.cfm within sbTarget and tcTarget. This worked, except that page fragments no longer had access to any session state, including the ID of the currently logged-in user.

My solution to this problem was to generate JavaScript on the server side that would create front-end JS variables for each needed session variable, run that JS code when the application’s login form was submitted, and have AjaxLoadPageToDiv() pass all of those variables to fragment pages as part of the HTTP query string. This meant that all form submissions required custom JavaScript to build a GET request that would collect form fields’ values and submit them to the back-end, which is a horrible abuse of GET (the HTTP standards intend such submissions to be POSTed, placing the form fields within the body of the request rather than in the URL). It also meant that session timeouts were handled poorly, security problems were many, and adding new features to the application was complex and difficult, requiring a great deal of JavaScript code that bloated the initial load of the application to unreal proportions.
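The difference between the two submission styles can be made concrete. The field names and template path below are hypothetical; the sketch simply builds the raw request text for each style, with no network calls:

```python
# Illustrative contrast: form fields in a GET query string vs. a POST body.
from urllib.parse import urlencode

fields = {"userID": "42", "status": "approved"}   # hypothetical form fields

# GET abuse: session-sensitive values end up in the URL (and server logs).
get_request = "GET /tc/updateStatus.cfm?" + urlencode(fields) + " HTTP/1.1"

# Proper POST: the same fields travel in the request body instead.
post_body = urlencode(fields)
post_request = ("POST /tc/updateStatus.cfm HTTP/1.1\r\n"
                "Content-Type: application/x-www-form-urlencoded\r\n"
                f"Content-Length: {len(post_body)}\r\n"
                "\r\n" + post_body)
```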

In the current re-factor of Prefiniti, these problems have nearly all been mitigated. Visual rendering has been moved out of Application.cfm and into prefiniti_framework_base.cfm, the empty Application.cfm templates in the application subdirectories (tc, workFlow, socialnet, etc.) have all been removed, and page fragment templates now have full access to session state. The process of stripping out dependencies on GET requests and huge query strings is underway, and most of the JavaScript bloat will thus be easy to remove, future-proofing the application and making it more secure and much easier to maintain and extend. This also has the benefit that the server-side modules for core framework functionality and database I/O can be loaded once for the entire application and made available to page fragments with no additional effort.

UI updates are also on the way, by way of Bootstrap 4, making Prefiniti a modern, responsive, and mobile-ready platform for web applications.

Here’s to the future!

Why UTF-8 is a train wreck (or: UNIX Doesn’t Represent Everyone)

This post won’t go into the gory details of Unicode or the UTF-8 encoding. That ground has been covered better elsewhere than I could ever hope to here. What we’re looking at today is almost as much political as technical, although technical decisions play a huge part in the tragedy. What I am positing today is that UTF-8–for all its lofty compatibility goals–fails miserably in the realm of actual, meaningful compatibility.

The supposed brilliance of UTF-8 is that its code points numbered 0-127 are entirely compatible with 7-bit ASCII, so that a data stream containing purely ASCII data will never need more than one byte per encoded character. This is all well and good, but aside from UNIX and its derivatives, the vast majority of ASCII-capable hardware and software made heavy use of the high-order bit, assigning characters to code points 128-255. UTF-8, however, claims the high-order bit for its own purposes, using it to signal whether a byte begins or continues a multi-byte sequence. This makes 7-bit ASCII (as well as encodings touting 7-bit ASCII compatibility) little more than a mental exercise for most systems: like it or not, the standard for end-user systems was set by x86 PCs and MS-DOS, not UNIX, and MS-DOS and its derivatives make heavy use of the high-order bit. UNIX maintained 7-bit purity in most implementations, as mandated by its own portability goals, and UTF-8’s ultimate specification was sketched on a New Jersey diner placemat by Ken Thompson, the inventor of UNIX, and Rob Pike, one of its earliest and most prolific contributors. UTF-8 effectively solved the problem for UNIX systems, which were pure 7-bit systems from the beginning. But why should UTF-8’s massive shortcomings have been foisted upon everyone else, as if UNIX–like many of its proponents–were some playground bully, shoving its supposed superiority down everyone else’s throats?

It should not. The UNIX philosophy, like functional programming, microkernels, role-based access control, and RISC, has its merits, but it is not the only kid on the block, and solutions like UTF-8 that just happen to work well in UNIX shouldn’t be forced upon environments where they only break things. Better to make a clean break to a sane, fixed-width encoding like UTF-32, perhaps providing runtimes for both ASCII (including its 8-bit extensions) and the new encoding to allow software to be ported to use it piecemeal. At least with something like UTF-32, data from other encodings can be programmatically converted to it, whereas with UTF-8 with its two-bit 8th-bit meddling, there’s no way of knowing whether you’re dealing with invalid code points, kludgey shift characters, or some ASCII extension that was used for a meaningful purpose.
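The core technical claim here is easy to demonstrate: the low 128 code points decode identically under ASCII and UTF-8, but a lone high-bit byte from an 8-bit ASCII extension is not valid UTF-8 at all. A quick Python check:

```python
# Bytes 0x00-0x7F decode identically as ASCII and as UTF-8.
ascii_bytes = bytes(range(128))
assert ascii_bytes.decode("utf-8") == ascii_bytes.decode("ascii")

# 0xE9 is 'é' in Latin-1-style 8-bit ASCII extensions ...
high = b"caf\xe9"
assert high.decode("latin-1") == "café"

# ... but as UTF-8 it is the start of a multi-byte sequence with no valid
# continuation byte, so decoding fails outright.
try:
    high.decode("utf-8")
    ok = False
except UnicodeDecodeError:
    ok = True
```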

ArcaOS 5.0: UNIAUD Update

I have discovered that part of my UNIAUD audio driver problem can be solved. Using the command-line UNIMIX.EXE, I can manually set the speaker output level. It turns out that sound was actually being generated, but only to the headphone jack.

There’s still another problem, however: desktop sounds repeat and are very noisy and filled with static.

I will be publishing a few screenshots of ArcaOS in the coming days.

VistA Innovation?

VistA cannot evolve if its MUMPS code is viewed as the unfortunately obsolete back-end for Node.js applications.

If we buy into the current prevailing wisdom that we should essentially leave VistA’s MUMPS code in maintenance mode, enshrining its current structure and shortcomings, we are implicitly asking for it to be slowly phased out, and replaced with something else.

Adding blobs of red and mouse support to ScreenMan forms is not VistA innovation.

Building hundreds of new RPC broker calls for consumption by JavaScript code is not VistA innovation.

Building tools to paper over the cracks in KIDS and DIFROM is not VistA innovation.

Writing web frameworks that expose MUMPS globals and VistA RPCs is not VistA innovation.

Even if you use every DevOps tool and agile methodology that is trending on Reddit while you’re doing these things, it’s not VistA innovation.

We can wax eloquent at great length saying that lab and scheduling are the keys to the kingdom, but the very best lab and scheduling packages really aren’t VistA innovation.

We are at this point essentially putting lipstick on a pig. The pig may be a tremendously powerful and intelligent wild boar that can do thousands of things normal pigs can’t do, but wrestling with it will still leave a bruise.

That’s not to say that DevOps tools, web frameworks, packaging ideas, or any of these projects and ideas aren’t innovative. They are, and everyone who does that work deserves praise and appreciation for it. But these are accessories. Nice, useful, pretty, and even essential accessories. But are they VistA? No. VistA is 30,000+ MUMPS routines, written in a style that was in vogue during the Reagan administration.

VistA’s entire MUMPS codebase needs to be refactored. Not replaced, but refactored in a way that reflects all the great and useful techniques that computer science has taught us since the underground railroad went mainstream. And yes, I mean APIs. I mean separation of concerns. I mean (perhaps controversially) that the SAC needs to quit forbidding mnemonically useful identifiers, and instead start forbidding us to leak data through local variables. Well-defined interfaces that cannot speak to any component of the software more than one layer of abstraction away. Interfaces forming a strong contract between the software and the developers who develop against it.

MUMPS is more than up for the task. We have scoped variables with NEW. We have call by value and call by reference. We can pass complex objects to and return complex objects from well-defined methods in the form of MUMPS extrinsic functions and the glorious dot operator. Every modern implementation of the MUMPS language supports at least 31-character identifiers and large routines, so that routine names like ZZTQPRL3 are now not only unnecessary, but indefensible.

VistA cannot survive if we have the hubris to maintain that its design is sacrosanct, and superior by definition to new technologies. Along with this, we can no longer pretend that medical software is any different from other complex software, nor can we lie to ourselves and say that MUMPS–or hierarchical database technology in general–is inherently superior to other database technologies in our domain, and finally, we cannot continue insisting that advances made in programming methodology and software architecture don’t apply to us.

It’s been asserted–but not once effectively proven or even rationalized–that these computer science concepts (layers of abstraction, interface contracts, APIs, and separation of concerns) somehow don’t apply to medical software, or to VistA. I’ve personally heard arguments ranging from “APIs are an attack vector” to “VistA is a living organism, unlike any other software.”

Poppycock. Absolute rubbish. So completely wrong as to be comical.

First, VistA is utterly loaded with APIs. Every time someone calls into Kernel or FileMan, that’s an API. Every time someone writes a new RPC, that’s an API. And every one of them is as much an “attack vector” as it is in modern software. The only real difference is that ours aren’t well-architected, ours don’t separate concerns, ours are poorly documented, ours require way too many arguments, and ours have horrible names that nobody can remember.

Second, software is software is software. The things that make an operating system unmaintainable make an EHR unmaintainable. The things that make a word processor maintainable make an EHR maintainable. Even the argument that hierarchical databases are somehow inherently better-suited to medical data than relational databases (or network databases, or any other database) is specious and silly. Perhaps this was true in the 1970s, but it is not true today. Every data structure that you can represent in FileMan, you can represent in Oracle or MySQL or DB2, with minimal fuss. Look at Practice Fusion. Look at Amazing Charts. The hip, new EHRs are all based on modern databases. It can be done.

It’s been argued that MUMPS’ lack of schema makes it easier to change the database to match the evolution of medical data without re-writing the software. Again, rubbish. Once FileMan is in the picture, we are right back to employing a schema that requires UI modifications once we change it. FileMan enforces its own schema on data organization. True though it is that external modules like ScreenMan make it relatively easy to propagate schema changes into the user interface, this same sort of ease exists in relational databases with technologies like ORM, LINQ, and others. And today, there are methodologies that make it even easier to propagate schema changes all the way up to the UI. If software developers employ proper separation of concerns and strong interface contracts, changes to the schema are transparent to the UI.
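As a minimal sketch of that discipline (hypothetical names throughout; this is not VistA or FileMan code, and any modern language would do), here is the shape of a storage layer hidden behind an interface contract, so that a change to the record layout never reaches the UI:

```python
from dataclasses import dataclass
from typing import Protocol

@dataclass
class Patient:
    """The contract: the only shape of patient data the UI may see."""
    name: str
    age: int

class PatientRepository(Protocol):
    """Interface contract between the UI and the storage layer."""
    def fetch(self, patient_id: int) -> Patient: ...

class FlatFileRepository:
    """Storage layer; its internal record layout can change freely,
    as long as fetch() keeps honoring the Patient contract."""
    _rows = {1: ("SMITH,JOHN", 67)}   # internal schema: a 2-tuple today

    def fetch(self, patient_id: int) -> Patient:
        name, age = self._rows[patient_id]
        return Patient(name=name, age=age)

def render(repo: PatientRepository, patient_id: int) -> str:
    """UI layer: depends only on the Patient contract, never on storage."""
    p = repo.fetch(patient_id)
    return f"{p.name} ({p.age})"

print(render(FlatFileRepository(), 1))   # SMITH,JOHN (67)
```

If the storage layer later reorganizes `_rows` into a different structure, only `fetch()` changes; `render()` and everything above it are untouched. That is the transparency being claimed for ORM-style layering.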

VistA admits of no such discipline.

In VistA, user interface, business logic, schema definition, and data storage are tangled together like Christmas lights in the box in Grandma’s attic. You can’t even programmatically define a new FileMan file; it’s all done through interactive green-screen UIs, and distributed in KIDS builds, the installation of which is notoriously error-prone.

MUMPS has the facilities to make all of these nightmares disappear, and where it shines is in throughput, robustness, and scalability. It has great facilities for indirection, data hiding, abstraction, and all the other tools we need to make VistA even more awesome than it is. Just takes some time and dedication. It’s also fast. Extremely fast. Like, bullet train fast.

And VistA is awesome. The developers in its community are awesome and have tons of enthusiasm. But today, its core infrastructure needs some serious attention. MUMPS and VistA are kind of like a gorgeous Tudor mansion: scores of beautiful, ornate, and useful rooms, but all the pipes leak, the wallpaper is peeling, and the light in that one downstairs bathroom is always flickering for some reason. And we’ve lost the blueprints.

The VA wants to bulldoze the house and put up a shopping mall, a Best Buy, and a McDonald’s. In the meantime, they’ll throw some glue behind the wallpaper and set up buckets underneath the leaky pipes.

But the house is free for public consumption and improvement! So instead of doing what they’ve been doing, let’s fix the plumbing, put in some new wallpaper, and fix the electrical system. And while we’re at it, we can add central heating and a gourmet kitchen.

That is VistA innovation.


What I want as a computer user and what I want as a computer programmer are often polar opposites.

As a programmer, I want computer languages and operating systems that give me complete, direct, and possibly dangerous control over the hardware. The power to not only create blisteringly fast and efficient code, but also to bring down the whole machine on a whim. If I can break the hardware in the process, all the better.

This is the essence of my earliest exposure to programming: writing BASIC programs in MS-DOS, with assembly language for the performance-critical sections. Translating the assembly language into the hexadecimal digits representing machine code, loading these codes into a BASIC string, and calling the address of the string directly. Getting into the muck of segmented memory, PEEKing and POKEing memory directly, engaging in dirty tricks to push the hardware to its limits, writing self-modifying code to squeeze every bit of power out of every CPU cycle. The antithesis of the UNIX philosophy.

As a user, I want all of the above to disappear, and programmers to be forced into high-level, safe, and nominally interpreted languages, on protected-mode operating systems that erect impregnable walls between software and hardware. As a user, I essentially want my computer to be as much of an appliance as my toaster.

If I get what I want as a programmer, the users’ lives become frustrating, and the audience and ultimate reach of computing is reduced.

If I get what I want as a user, most programmers’ lives become tedium and drudgery, as they’re reduced from engineers to technicians.

However, if I get what I want as a programmer, perhaps computers become once again the exclusive domain of geeks who will never again equate a CD-ROM tray to a cupholder or ignore the distinction between the web and the Internet, or refer to crackers as hackers. Doesn’t sound like a bad outcome, from my perspective.

It’s probably better for humanity that I don’t get my way. Just don’t take away my old DOS machine and my old assembler, lest I become even more of a curmudgeonly old fart than I already am.


Memories and Dynamic Textboxes

Around 2000, I had the dubious honor of enhancing–under contract–a VB6 application that interfaced to MODBUS radio telemetry units. Instead of using a ListBox or some other appropriate control, it employed a control array of TextBox controls to visualize raw sensor voltages from remote units. The kicker was that all of this code was attached directly to a timer control that polled the units–a subroutine about 20 pages long that munged together MODBUS parsing, UI updates, radio communication, and virtually every other part of the application. When this code needed to do something with the received data, it read it back from the dynamic text boxes it had created during its poll cycle.

Dynamic textboxes are bad, mmmkay?

I refactored the code into a much better system and UI without dynamic textboxes. The new UI showed a tree view of all remote units and allowed reporting on each one, as well as fancy charts and graphs. Each sensor on each remote unit could have a custom math formula to turn its raw voltages into human-readable data. My version also logged data to a SQL Server database for archival and later analysis.

I was supposed to be hired by the company that originally made the software in order to turn it into a full-fledged SCADA (supervisory control and data acquisition) suite, but various situations with them and the organization to which I was contracted precluded that job move.

I have long since moved into the Linux and medical systems world, most recently doing Node.js development and EHR support. But this, my first programming job, has always stuck with me as a real “baptism by fire” moment, with which many fun memories are associated. I still have a fond place in my heart for VB6–with all its warts–but the process of creating from memory the little UI mock-up for the image I used on this post (which was done in VB6 in a Windows NT 4.0 virtual machine on my Linux box) makes me realize how far we’ve come and why we should never hope to go back.

What I want in a platform…

Just a small wishlist/rant:


No more software buttons. Why do hardware engineers feel the need to make my volume controls, mute button, and power button require intervention from the operating system? Do it in hardware. I really don’t want a volume change to have to wait on the kernel to schedule the handling of its interrupt.

Whether this is a laptop, desktop, or server, make the chassis out of steel that doesn’t flex. I’m not so weak that I need a one-ounce laptop to be satisfied with life.

Casework should be toolless, and components should be modular and easily field-upgradable–even on a laptop. Come up with several form factors for a laptop chassis, and allow the guts to be upgraded over time. Standardize, standardize, standardize. For the largest one or two laptop form factors, weight should not be a consideration.

Plenty of USB ports is a definite must, but I also want internal 10/100/1000 Ethernet and one RS-232 serial port. Also, give me a modular bay that can accommodate a 5.25″ device (optical drive, or even a floppy).

Instruction Set and CPU Architecture

A nice, orthogonal CISC instruction set with lots of flexible datatypes and addressing modes. Lots of general-purpose registers. Give me an instruction to copy bytes, words, etc. directly from RAM to RAM. Don’t make me sidestep through a register. If you can’t do this in pure hardware, microcode it. I don’t particularly care if we’re big-endian or little-endian, but I do like the POWER ISA’s ability to switch endianness. I want a flat virtual memory address space. If we have hardware multitasking (which we should) and the ability to support both little- and big-endian byte ordering, allow me to have different endianness per task segment. Allow CPU microcode to be software-upgradable.

I want several rings of privilege, and although I’m aware that ring transitions require lots of work, please optimize them as much as humanly possible. Don’t give me CALL instructions that are so inefficient as to be less usable than combinations of many more primitive instructions.

Give me lots of I/O channels (with DMA, obviously). I/O processing should be offloaded properly to a specialized ASIC or ASICs. Each I/O bus (PCIe, VME, SCSI, etc.) should be connected to an I/O processor ASIC, and this processor should have a uniform API in the firmware (see below) abstracting away the differences between the various attached buses.

Give me massive L1 cache, and good support for multiple CPUs. Multiple complete CPU support should be given more priority than multiple cores/threads per CPU.


I want firmware with a fully-realized CLI. Give me a dedicated Ethernet interface for firmware, and allow me to natively redirect console output with VNC over this Ethernet connection, or via ssh if the console is not in graphical mode. The CLI should give me the ability to enumerate all hardware devices, configure things like SCSI IDs, memorable device aliases, Ethernet MAC addresses, etc. I should be able to boot from any partition on any connected storage with a simple command. I should also be able to dump registers and do kernel debugging through this CLI.

The firmware should have an API that is fully supported in all CPU modes without hacks like virtual 8086 mode. This API should allow me to enumerate and access all devices and memory easily. It should be available to a first-stage bootloader.

More to come…