Event Handling in FreeM

FreeM implements synchronous event handling as defined in ANSI X11.6 (MWAPI) and asynchronous event handling as proposed in MDC Type A extension proposal X11/1998-28, with several significant vendor-specific extensions. Though the M Development Committee’s use of the terms “synchronous” and “asynchronous” is technically correct, the way the MWAPI and X11/1998-28 event handling models use those terms may seem unusual or foreign to those accustomed to event handling in world-wide web technologies such as JavaScript. The remainder of this article explores the X11/1998-28 and MWAPI event handling models in some depth, as well as the architecture with which FreeM implements and extends them.

Synchronous Events

In M parlance, a synchronous event is one originating from a graphical user interface defined in the M Windowing API (MWAPI). To begin accepting and processing synchronous events, normal, procedural M code must execute the ESTART command, which implicitly enters an event processing loop. ESTART will block the flow of M code execution on the code path in which ESTART was invoked: M code immediately following ESTART will not execute until a synchronous event handler subroutine outside the primary code path of the application calls ESTOP to stop the implicit event processing loop.

Synchronous event handlers are typically registered in the ^$WINDOW structured system variable. The following code will create the window with ID myWindow, register CLOSE^WINDOW as the event handler for the CLOSE event class on window ID myWindow–called when the close gadget is pressed or the window is closed by other means. It will then begin the implicit synchronous event processing loop:

  SET W("EVENT","CLOSE")="CLOSE^WINDOW" ; Create a window definition
  MERGE ^$WINDOW("myWindow")=W ; After this MERGE, the window will appear
  ESTART ; This enters the implicit event processing loop
  QUIT ; Will not execute until CLOSE^WINDOW calls ESTOP

The CLOSE^WINDOW event handler subroutine, outside the primary code path, is what stops the loop:

CLOSE ; CLOSE event handler in routine WINDOW
  ESTOP ; Stop synchronous event processing
  QUIT

Other metadata about the CLOSE event, including the window ID and window type (among others) would be supplied to CLOSE^WINDOW by populating nodes of the ^$EVENT structured system variable, which is implicitly NEWed prior to the relevant nodes being populated and CLOSE^WINDOW being invoked.

In FreeM, the ESTART event processing loop for the above code sample takes the following steps:

  1. Check if ESTOP has been called. If so, exit the event processing loop and proceed to the next command following ESTART.
  2. Wait for GTK to have a user interface event in its own internal queue. During this step, the FreeM thread in which the event loop runs goes to sleep.
  3. At this point, the user closes window myWindow.
  4. Check the received GTK event information against FreeM’s table of windows, and see if a CLOSE event handler has been registered for this window ID (myWindow).
    1. If so, implicitly execute NEW ^$EVENT, populate it with metadata about the event and window from which the event originated, and then execute the M subroutine specified at ^$WINDOW("myWindow","EVENT","CLOSE"). When that subroutine (in this case, CLOSE^WINDOW) exits, return to the top of the event processing loop (step 1).
    2. If not, ignore the event and return to step 1. In the above case, this does not apply, as CLOSE^WINDOW was defined as an event handler for event class CLOSE on window ID myWindow.
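
The decision in steps 4.1 and 4.2 can be modeled in miniature. The following Python sketch is illustrative only (FreeM implements this loop in C against GTK); the window table layout and the handler-invocation interface are invented for the example:

```python
# Illustrative model of the ESTART dispatch loop; not FreeM's actual C code.
# The window table stands in for ^$WINDOW: it maps a (window ID, event
# class) pair to the label of a registered M event handler subroutine.
window_table = {
    ("myWindow", "CLOSE"): "CLOSE^WINDOW",
}

def dispatch(gtk_events, run_handler):
    """Drain pending GUI events until a handler calls ESTOP.

    gtk_events stands in for GTK's internal event queue (step 2);
    run_handler(label) runs an M handler and returns True if it
    executed ESTOP. Returns the list of handler labels invoked."""
    ran = []
    while gtk_events:                       # real loop: sleeps until GTK wakes it
        window_id, event_class = gtk_events.pop(0)
        handler = window_table.get((window_id, event_class))  # step 4
        if handler is None:
            continue                        # step 4.2: no handler; ignore event
        ran.append(handler)                 # step 4.1: NEW ^$EVENT, populate
        if run_handler(handler):            # metadata, invoke the M subroutine
            break                           # ESTOP was called; leave the loop
    return ran

# An unregistered event is ignored; the registered CLOSE handler calls ESTOP:
handlers_run = dispatch([("otherWin", "CLOSE"), ("myWindow", "CLOSE")],
                        run_handler=lambda label: label == "CLOSE^WINDOW")
```

Here handlers_run comes back as ["CLOSE^WINDOW"]: the first event has no registered handler and falls through step 4.2, while the second invokes CLOSE^WINDOW, whose ESTOP ends the implicit loop.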

The above example illustrates how, from the perspective of ESTART, this type of event processing is indeed synchronous. However, while ESTART is in control, user interface events are still processed asynchronously by the underlying windowing system. This can be confusing, as MWAPI events straddle the boundary between low-level and high-level concepts, requiring the developer to be at least somewhat familiar with both.

MWAPI–and therefore synchronous events in M–preceded the development of the asynchronous events specification, and unlike asynchronous events, synchronous events are codified in published and existing MDC standards: specifically, ANSI X11.6.

Asynchronous Events

From the perspective of the M Development Committee, asynchronous event processing exists only as a Type A Extension–specifically, extension X11/1998-28. This extension was proposed by Arthur B. Smith in September 1996 and elevated to Type A extension status–as document X11/SC15/1998-6–in June 1998 at an MDC meeting in Boston, MA. As of this writing, FreeM is the only implementation known to have implemented any part of this proposal.

Event Classes

Each asynchronous event is broadly categorized into an event class, referred to as an evclass in relevant standards. FreeM event classes are as follows:

Event Class             Description
-----------             -----------
COMM                    Allows application code to respond to communications events
HALT                    Allows applications to handle HALT events
IPC                     Supports inter-process communication
INTERRUPT               Allows applications to respond to operating system interrupt signals
POWER                   Intended to allow applications to respond to imminent power failure messages from uninterruptible power supplies
TIMER                   Supports the asynchronous execution of an M subroutine after a specified time has elapsed
TRIGGER (non-standard)  Allows an M subroutine to run when data in an M global is accessed, changed, or deleted
USER                    Designed to support user-defined events
WAPI                    Reserved for MWAPI events; MWAPI only supports synchronous event processing at the time of writing

FreeM Event Classes

Event Identifiers

Beyond the event class, events are further categorized into specific event identifiers, referred to in relevant standards as evids. Event identifiers are often used as a sort of sub-type within a particular event class. A specific event is therefore identified by the pairing of its event class and its event identifier.

In short, event classes indicate broad categories of events, while event identifiers indicate specific types of events within an event class.

Registering Asynchronous Event Handlers

Registering an event handler is the mechanism by which the M programmer associates an event class and event identifier with an M subroutine that the M implementation will execute when that event occurs. For example, if we wanted to run the RESIZE^ASNCDEMO M routine any time the user’s terminal window was resized, we’d want to handle an event with event class INTERRUPT, and event identifier SIGWINCH. The following code will associate the above event class and identifier with the RESIZE^ASNCDEMO subroutine:

ASNCDEMO ;
  ; Register RESIZE^ASNCDEMO for event class INTERRUPT, identifier SIGWINCH
  SET ^$JOB($JOB,"EVENT","INTERRUPT","SIGWINCH")="RESIZE^ASNCDEMO"
  QUIT
  ;
RESIZE ; Asynchronous event handler for INTERRUPT/SIGWINCH
  WRITE "The terminal was resized!",!
  QUIT

Much like synchronous events, metadata about asynchronous events–if any such metadata exists–is populated in the ^$EVENT structured system variable. As an explanation of all possible subscripts and values of ^$EVENT is far beyond the scope of this article, you are encouraged to consult your M vendor’s documentation for more information. As of this writing, that would mean consulting the FreeM manual: no other known M implementation has yet implemented this Type A extension.

Starting and Stopping Asynchronous Event Processing

Though the action of the code above will associate an M subroutine with an event class and identifier, this alone will not cause the M implementation to begin processing asynchronous events. Much like ESTART begins processing synchronous events, ASTART must be run before asynchronous event processing can occur. The ASTART command looks like this:

ASTART:postcondition [[evclass,...] | [(evclass,...)]]

As is typical with M commands, ASTART supports argumentless, inclusive, and exclusive forms. In its argumentless form, ASTART will begin asynchronous event processing for all event classes. In its inclusive form, ASTART will begin asynchronous event processing for only the specified event classes. Finally, the exclusive form of ASTART begins asynchronous event processing for all event classes except those specified.

Let’s further flesh out our ASNCDEMO routine to enable asynchronous event processing for the INTERRUPT event class:

ASNCDEMO ;
  ; Register the handler, then enable INTERRUPT event processing
  SET ^$JOB($JOB,"EVENT","INTERRUPT","SIGWINCH")="RESIZE^ASNCDEMO"
  ASTART "INTERRUPT"
  QUIT
  ;
RESIZE ; Asynchronous event handler for INTERRUPT/SIGWINCH
  WRITE "The terminal was resized!",!
  QUIT

While the above code will definitely enable asynchronous event processing for INTERRUPT events, the user would never see any output from the event handler, as the program would quit prior to any event occurring: unlike ESTART for synchronous events, ASTART is always non-blocking. Therefore, in the above example, ASTART "INTERRUPT" will enable asynchronous event processing for INTERRUPT events and return immediately. As the next command in the routine is QUIT, the routine will immediately exit. The non-blocking nature of ASTART is a primary reason why asynchronous events in M are so named: they do not block the primary code path or enter an implicit event loop.

Due to the non-blocking nature of ASTART, asynchronous event processing in M probably makes the most sense for applications that provide their own loop: for instance, an application that displays a menu, accepts a selection, performs processing, and then re-displays its menu, or, an application that runs in an I/O loop gathering data, processing it, and storing results.

Blocking and Unblocking Asynchronous Events

Each asynchronous event class is paired with an event block counter specific to that event class. This counter is a simple integer: while it is nonzero, events of that class are queued for later processing instead of immediately invoking the M subroutine associated with the class. This mechanism is implicitly employed on invocation of an event handler subroutine: prior to entering the event handler, the event block counters for all event classes are incremented by one, ensuring that the execution of one event handler can never be interrupted by the execution of another. Like M’s incremental LOCK, event blocking is incremental: a block counter may be incremented several times, and must return to zero before event handlers for that class will execute.

Event blocking and unblocking can also be achieved manually via the ABLOCK and AUNBLOCK commands, whose syntax are thus:

ABLOCK:postcondition [[evclass,...] | [(evclass,...)]]
AUNBLOCK:postcondition [[evclass,...] | [(evclass,...)]]

In their argumentless forms, ABLOCK and AUNBLOCK will increment or decrement the event block counters for all event classes. In their inclusive forms, they will increment or decrement the event block counters for only the specified event classes. In their exclusive forms, they will increment or decrement the event block counters for all event classes except those listed.
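
These forms and the counter semantics can be modeled in a few lines. The sketch below is illustrative Python, not FreeM internals; in particular, the floor-at-zero behavior of AUNBLOCK is an assumption of the model:

```python
# Illustrative model of per-event-class block counters (not FreeM internals).
EVENT_CLASSES = ["COMM", "HALT", "IPC", "INTERRUPT", "POWER",
                 "TIMER", "TRIGGER", "USER", "WAPI"]
block_count = {ev: 0 for ev in EVENT_CLASSES}

def _targets(evclasses, exclusive):
    if not evclasses:
        return EVENT_CLASSES                # argumentless form: all classes
    if exclusive:                           # exclusive form: all except listed
        return [ev for ev in EVENT_CLASSES if ev not in evclasses]
    return list(evclasses)                  # inclusive form: only those listed

def ablock(*evclasses, exclusive=False):
    """Model of ABLOCK: increment the counter for each targeted class."""
    for ev in _targets(evclasses, exclusive):
        block_count[ev] += 1

def aunblock(*evclasses, exclusive=False):
    """Model of AUNBLOCK: decrement, flooring at zero (an assumption)."""
    for ev in _targets(evclasses, exclusive):
        block_count[ev] = max(0, block_count[ev] - 1)

def deliverable(evclass):
    """Handlers for a class may run only while its counter is zero;
    otherwise events of that class are queued for later processing."""
    return block_count[evclass] == 0
```

For example, after ablock("INTERRUPT"), deliverable("INTERRUPT") stays False until a matching aunblock("INTERRUPT"). ASTART shares the same argumentless, inclusive, and exclusive argument shapes.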

Remember earlier, when we mentioned that an argumentless ABLOCK is implicitly executed prior to entering an event handler subroutine, in order to prevent asynchronous event handlers from interrupting each other? It is possible, though not generally recommended, to AUNBLOCK one or more event classes inside of an event handler to enable reentrant behavior. This is not a feature for the faint of heart, nor for those without exceptionally sharp minds for writing reentrant code: the risks to the logical integrity of M globals are so great that you should do so only with an abundance of caution and prodigious, careful use of LOCK around global variable accesses in such event handlers. Here there be dragons!

FreeM Extension: System-Wide Asynchronous Events

In FreeM, the X11/1998-28 extension has been extended to support events that will be recognized in all FreeM processes on the system, rather than being limited to the current process only. The only difference is in the registering of event handlers: rather than registering handlers in ^$JOB($JOB,"EVENT",…), system-wide event handlers are registered in ^$SYSTEM("EVENT",…).

FreeM Asynchronous Event Handling Architecture

FreeM employs an event queue for asynchronous events, shared across all event classes. External signals from the operating system will interrupt the flow of the FreeM C code, calling callbacks internal to FreeM that will enqueue an event, along with its event class, event identifier, and metadata, in the event queue, and interpreter execution will resume. If the event is not important enough to immediately interrupt the interpreter, the event queue will be checked and handlers run after the current M command completes. If the event is extremely important, FreeM will raise error condition ZASYNC. Once ZASYNC is raised, at the next checkpoint where FreeM checks for an error condition, the internal error handler will be invoked. When $ECODE is ZASYNC, FreeM will immediately drain the event queue, executing all pending event handlers, prior to resuming normal program execution.
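
The ordinary queue-and-drain flow can be sketched as follows (illustrative Python; FreeM itself implements this in C, and the function names here are invented):

```python
from collections import deque

# Illustrative model of FreeM's shared asynchronous event queue.
# One queue serves all event classes; signal callbacks only enqueue.
event_queue = deque()

def signal_callback(evclass, evid, metadata):
    """Stand-in for the C callback run when an OS signal arrives:
    record the event and return immediately so interpretation resumes."""
    event_queue.append((evclass, evid, metadata))

def after_command(run_handler):
    """Stand-in for the check FreeM makes once the current M command
    completes: drain the queue, running each pending handler in order."""
    results = []
    while event_queue:
        evclass, evid, metadata = event_queue.popleft()
        results.append(run_handler(evclass, evid, metadata))
    return results
```

In this model, signal_callback("INTERRUPT", "SIGWINCH", {}) may fire mid-command without disturbing the interpreter; the registered handler then runs at the next command boundary, when after_command drains the queue.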

FreeM History

Since 2014, I have been the maintainer of the primary fork of the FreeM implementation of the M programming language and database. I thought I would take some time to share some of the history of FreeM, as well as its current status and goals.

How I got involved in M and FreeM

My mentor in computer programming and UNIX was Larry Landis, who got involved heavily in the M/MUMPS programming language ca. 1991. He hyped up the M language to me from 1991 forward, and first demonstrated FreeM to me in August 1998. In 2010, I incorporated my company, Coherent Logic Development, learned M, and began doing contract work in M through Larry’s company, Fourth Watch Software.

Larry was the owner of FreeM’s SourceForge repository, which had not been touched in a number of years, following Fidelity National Information Services’ decision to release GT.M under a free software license. In August 2011, I downloaded the source code for FreeM and did enough work on it to get it running under modern GNU/Linux systems and posted it to the mumpster.org forums.

In 2014, Larry gave me administrator access to the FreeM SourceForge repository and transferred maintainership of the project to me.

Early History of FreeM

FreeM was developed in Germany in the mid-1990s by a developer who went by the pseudonym “Shalom ha-Ashkenaz”, whose actual identity remains unknown, though it is thought by some that he is or was a dentist who learned C and developed FreeM on his own time. Shalom developed FreeM at a time when Terry Ragon of InterSystems (the company that developed the ISM implementation of M) was buying up all of his competitors and shutting them down. Shalom wished to provide a community-driven, open-source implementation of M as a bulwark against the growing threat of single-vendor hegemony over the M language. Its design–as well as some of the documentation included with the original sources–indicate that FreeM was originally targeted to the MS-DOS family of operating systems. It made use of a very limited subset of the C library, and included instructions for renaming the MS-DOS style 8.3 filenames in order to compile under UNIX.

At one point in FreeM’s early history, Shalom ported FreeM from MS-DOS to SCO UNIX, the SVR3-based descendant of Microsoft XENIX, now known as SCO OpenServer–a platform still supported by FreeM. This port brought support for the scoansi terminal type, including colors and X.364 control mnemonics.

Enter the GUMP

Around the time Shalom ha-Ashkenaz was developing FreeM, Richard F. Walters, a professor from U.C. Davis, conceived of the GUMP, an acronym standing for “Generic Universal M Project”. The GUMP, following the object-oriented programming craze of the 1990s, was intended to be a toolkit allowing M implementations to be built from discrete components with a well-defined and well-specified public interface among these components. These components included the global handler (supplying the database functionality), and the interpreter/compiler (responsible for implementing M language commands). The components would have been able to communicate over a network, or in-process on the same host, enabling distributed computing functionality.

Although the specification for the GUM interface to global handlers reached a reasonable level of completeness, and Larry Landis and others developed a mostly-complete implementation of a GUM global handler, none of the other envisioned components were ever finished; in particular, the interpreter component was missing.

Shalom’s Gift

In July of 1998, Shalom ha-Ashkenaz donated the FreeM source code (then known as FreeMUMPS) to the M User’s Group-Deutschland (MUG-D), hoping that the community would take the nascent implementation from its infancy through to a state of production-ready completeness and robustness. Shalom also placed a few conditions on his gift: a public release could not be made until a substantial set of milestones were reached. Per his conditions, the FreeMUMPS project must:

  • Implement the entirety of X11.1-1995
  • Use Structured System Variables instead of VIEW commands/functions
  • Raise the string size limits
  • Implement MWAPI, OMI, X11 bindings, and GKS bindings
  • Be substantially free of major bugs

Although MUG-D readily accepted the contribution of FreeMUMPS, the organization itself lacked the manpower and expertise to complete the implementation. Just as it is now, the intersection of M community members who know enough of the M language and C language to work on a project this ambitious was quite small.

Merging GUMP and FreeM

Very shortly after the contribution of FreeMUMPS to MUG-D, Richard F. Walters and a small team of developers and administrative staff who had been working on the GUMP assumed maintainership of the FreeMUMPS source code. This included representatives from the M Technology Association (an M vendor association having several foreign branches), the M Development Committee (the M standards organization hosting the ANSI/ISO standards for the M language, then sponsored by the M Technology Association), and others. The goals of this team were to:

  • Meet Shalom’s requirements for a public release of FreeMUMPS
  • Convert FreeMUMPS into the first interpreter component of the GUMP

During this era, Ronald L. Fox of Hawaii (who passed away in 2010) ported FreeMUMPS from SCO UNIX to Red Hat 5 and glibc-6. Steve “Saintly” Zeck also attempted to rewrite the symbol table code to lift string size limits, David Whitten enhanced some of the implementation-specific extensions, and Larry Landis integrated Saintly’s symbol table work.

Early on in the GUMP maintainership of FreeM, the name of the implementation was changed from FreeMUMPS to Public Standard M, then to Free Standard MUMPS, and finally to FreeM, when it was discovered that the PSM acronym was already in use for Patterson & Gray’s M implementation. Dr. Walters also received the implementation ID of 49 from the then secretary of the M Development Committee, Don Piccone.

One of the contributors to FreeM at this stage–mainly in the area of M vendor routines–was Axel Trocha, who would go on to develop and maintain his own private fork of FreeM.

The GT.M Release

GT.M, standing for Greystone Technology MUMPS, is an M implementation that was released by Greystone Technology in 1986. Greystone was later acquired by Sanchez Computer Associates, which was in turn acquired by Fidelity National Information Services.

When GT.M was released under a free software license in 2000, it seemed to negate the entire raison d’être for FreeM, as GT.M was a well-established, robust, and high-performance M implementation with which FreeM could not then compete. Unfortunately, at this time, the GUMP and FreeM projects lost all of their momentum, and new development along these lines rapidly ceased. The final GUMP team release of FreeM was 0.5.0. However, Axel Trocha’s private port would continue.

Axel Trocha’s FreeM Fork

After FreeM’s momentum ceased within the primary branch of development under Richard F. Walters’ leadership, Axel Trocha, an aforementioned contributor of M vendor routines and member of Dr. Walters’ team, continued development on the FreeM source code. Axel added many interesting features to the FreeM codebase, including:

  • A native port to Microsoft Windows
  • Compiling FreeM as an Apache web server module, allowing FreeM to be used easily for web development
  • The ability to output HTML code in a heredoc-style format, with any line of code beginning with a left angle bracket being interpreted as HTML with support for interpolated M locals and globals
  • Extensions allowing FreeM to be used as a command-line shell, along the lines of UNIX bash, Windows cmd.exe, etc.

Axel also maintains ownership of the freem.net Internet domain, and continued issuing public releases of his FreeM port on that site until sometime after 2003, at which point he took his port entirely private. Currently, freem.net is a blank page. However, Axel’s fork of FreeM continues to this day as the back-end database and programming environment for the www.elvenrunes.de website. I have communicated with Axel occasionally.

Resuming Primary Development Branch

In 2011, I downloaded the FreeM source code from the GUM Project’s SourceForge repository–dormant since 2000–and updated it just enough that it would compile and run on modern GNU/Linux systems. I also quickly updated FreeM to support terminal sizes larger than 80×24.

Taking Maintainership

In 2014, Larry Landis gave me administrator access to the GUMP repository, transferring maintainership of the primary branch of FreeM development to me. Since then, I have added many features and corrected many bugs, including:

  • Adding support for proper namespaces, configured through /etc/freem.conf, which standardizes routine and global storage locations
  • Adding support for Structured System Variables
  • Adding support for the asynchronous event specification from the MDC Millennium Draft Standard
  • Adding support for constants through the CONST keyword
  • Adding a WITH keyword that allows you to specify an implicit prefix to all subsequent variable references
  • Adding a runtime watch command (ZWATCH), which tracks changes to specified locals or globals
  • Adding a ZASSERT command, which will fail with an error message if the following expression evaluates false
  • Adding support for operators such as ++, --, +=, etc.
  • Removing the Steve “Saintly” Zeck symbol table implementation, which was unreliable, and reverting to Shalom’s original implementation
  • Adding support for the GNU readline library, with persistent command line history and editing, as well as some level of command-line completion
  • Adding REPL-like functionality (in direct mode, any M expression beginning with a number will be prepended with an implicit WRITE)
  • Adding transaction processing (a work in progress)
  • Adding support for the M Windowing API (MWAPI), which is also a work in progress
  • Adding the “fmadm” command-line utility, for database administration functions
  • Adding support for after-image journaling and forward database recovery
  • Adding support for TCP and UDP client sockets, for both IPv4 and IPv6
  • Writing a texinfo manual, from which the HTML manual is derived
  • Porting to Solaris/SPARC, Solaris/x86, Linux/s390x, Linux/armv6l, Linux/armv7l, SCO OpenServer 5.0.7, Tru64 UNIX/alpha, AIX/ppc, Mac OS X/x86, GNU HURD, Cygwin, NetBSD, FreeBSD, OpenBSD, and WSL1/2

I have also created the https://freem.coherent-logic.com website, where distribution downloads and documentation are readily available.


FreeM is moving towards being a client-oriented desktop M implementation, for developing graphical user interfaces that will run on mobile and desktop devices.

I also intend to adopt the original vision of the GUMP team, dividing FreeM’s functionality into discrete components having a well-specified public interface, with the ability to run in distributed computing environments over a network.

FreeM’s mission is to advance the state-of-the-art in M implementations, and push the evolution of the language forward. Maintaining portability to as many vintage and modern UNIX systems as possible is held as a high priority, while portability of M routines and MDC standards compliance will be maintained only to the extent that it does not conflict with the primary goal of elegantly advancing the state-of-the-art and finding new audiences for the concepts originated by Neil Pappalardo and Octo Barnett in 1966.

FreeM is also strongly committed to free software principles, and is firmly aligned with the goals of the GNU Project and the Free Software Foundation, believing that the ethical concerns surrounding proprietary software are at least as important as the practical concerns of “open-source”.

FreeM is also being developed as a tool for enabling application development in worker/tenant cooperatives, and is committed to social justice and progressive principles.

If you are interested in FreeM, please see https://freem.coherent-logic.com for more information.

CI/CD for FreeM on Real UNIX

FreeM is a highly portable M implementation. As of this writing, it builds and runs on the following systems:

  • GNU+Linux (Ubuntu, Debian, Slackware, OpenSUSE, Raspbian) on i386, armv6l, armv7l, aarch64, x86-64, and s390x
  • Sun/Oracle Solaris 10 and 11 on i86pc and sparc64
  • HP Tru64 UNIX (a.k.a. Digital UNIX, a.k.a. OSF/1) on alpha
  • SCO OpenServer 5.0.7 on i386
  • IBM AIX 5L 5.1 on ppc
  • GNU HURD 0.9 on i386
  • NetBSD/amd64
  • OpenBSD/amd64
  • FreeBSD/amd64
  • Mac OS X on amd64

As the current FreeM maintainer, and an avid retrocomputing enthusiast, I am committed to supporting all of these–and more–permanently. However, being a single developer, building and testing each of these architecture/OS combinations for each increment of the FreeM codebase is hugely difficult if done manually. CI/CD platforms (like GitLab CI, Jenkins, and Rally) have no build agent support for many of these systems, and even getting SSH working can be a real challenge–and when you do, you may not have the ability to support the most modern encryption protocols.

Yet, a solution was needed. I would have to develop such a solution myself.

I began investigating the problem early on in my stewardship of the FreeM codebase, and decided that I needed to determine the lowest common denominator of automation, networking, and scripting capabilities that all of these systems could support. This is what I arrived at:

  • TCP/IP (using IPv4) is universally available
  • All of them have some support for cron
  • NFS v2 or greater, though NFS v3 is spotty and NFS v4 is rare
  • Vanilla Bourne shell (some variant of ksh is also relatively common, but I saw no reason to dig into its specifics, as all of the ksh variants will support vanilla Bourne shell constructs if you’re careful)

FreeM is developed from a locally-hosted GitLab git repository. It became obvious early on that doing a git pull as a core mechanic from each build host of my CI solution would not be feasible, as the git software has extensive prerequisites that many old UNIX systems are incapable of providing.

A central file server, using NFS v2, exports a filesystem for use by all build farm hosts. It contains a master list of build farm hosts in a file called servers.list, each line of which contains the short hostname (equivalent to hostname -s) of one build farm host. The filesystem also has a subdirectory corresponding to each of the build farm hosts, where the code of FreeM will be deposited and worked on. Each build farm host mounts this filesystem at /var/nas/freem-build.

There are a number of files corresponding to the current build status on each host (success, failure, running), of which only one will ever exist concurrently. Each host also has a log file containing the output of the most recent build attempt, and potentially a file to indicate that a build has been requested on that host.

I developed a series of Bourne shell scripts:

  • frm-bldall will request a build on all build hosts in servers.list by creating empty files with a certain naming convention in /var/nas/freem-build
  • frm-bldlog will display the latest build log output for a particular build host
  • frm-bldstat will display the build status (success, failure, running) for a particular build host
  • frm-build will attempt a configure and make on the current host
  • frm-chkbuild will check for the existence of a “build requested” file for the current build host in /var/nas/freem-build, and run frm-build if it exists (run from each host’s root user crontab every five minutes)
  • frm-cloneall will git clone the FreeM code repository for all build farm hosts (run from a post-commit hook on the GitLab server when a tagged release is pushed)
  • frm-commit will make sure FreeM can be successfully built on the local machine, and if so, will update the semantic version info in the appropriate files, update the change log, prepare a tagged commit, and push it to GitLab, which will run frm-cloneall and frm-bldall in its post-commit hook
  • frm-reclone re-clones the source repository for a requested build host (will not run if the requested build host is currently running a build)
  • frm-reqbuild requests a new build from a specific build host
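
The frm-chkbuild/frm-build cycle amounts to simple file-based signaling over the NFS mount. As a sketch of the idea, here is an illustrative Python rendition (the real tools are Bourne shell, and the marker and status file names here are invented for the example, not the actual convention):

```python
import pathlib
import subprocess

def check_build(hostdir: pathlib.Path,
                build_cmd=("sh", "-c", "./configure && make")):
    """Model of frm-chkbuild + frm-build for one host's directory under
    /var/nas/freem-build: if a build was requested, run it, capture a
    log, and leave exactly one status file (success/failure/running)."""
    request = hostdir / "build-requested"    # hypothetical marker-file name
    if not request.exists():
        return None                          # nothing requested; cron job exits
    request.unlink()
    for status in ("success", "failure", "running"):
        (hostdir / status).unlink(missing_ok=True)
    (hostdir / "running").touch()            # only one status file at a time
    result = subprocess.run(build_cmd, cwd=hostdir,
                            capture_output=True, text=True)
    (hostdir / "build.log").write_text(result.stdout + result.stderr)
    (hostdir / "running").unlink()
    outcome = "success" if result.returncode == 0 else "failure"
    (hostdir / outcome).touch()
    return outcome
```

Because the state lives entirely in files on the shared filesystem, the only tooling each vintage host needs is cron, a Bourne shell, and an NFS mount, which is exactly the lowest common denominator identified above.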

The various elements generated by this CI system are also used to populate the build status page on the FreeM website.

The system, while already quite useful, has a number of glitches still to be ironed out:

  • Since all the build hosts run the build as root, there are lingering permissions issues. In a future release, there will be a dedicated user account for the CI system, with matching UIDs on each system.
  • There are occasional race conditions.

Eventually, I will enhance the system to be more generic (supporting projects other than FreeM), and also extend it to generate native binary packages for each supported platform.

In spite of GNU+Linux dominance, I am committed to supporting 1990s-style portability across all UNIX systems, and I hope that these tools will eventually enable others to do the same for their own projects.

Lucee Installation on Solaris 11 (SPARC)

Here, we will look at the process of installing the Lucee CFML application server under Apache Tomcat 8 on Oracle Solaris 11 on SPARC machines. We will also set up an Apache proxy with mod_cfml to ease the process of deploying Apache VirtualHosts without having to configure each virtual host twice (in Apache itself, as well as Tomcat’s server.xml file).

My Setup

  • The server is a SunFire T2000 with an UltraSPARC-T1 CPU. Great performance for Java workloads like this.
  • I’m running Solaris 11.3, as 11.4 does not support the UltraSPARC-T1 CPU.
  • I’m running this setup in a Solaris non-global zone.


You will need to install the following packages from the “solaris” Oracle IPS repository:

  • apache-22 (as of this writing, Apache 2.4 segfaults when using mod_proxy)
  • tomcat-8
  • gnu-make
  • gnu-sed

You will also need Oracle Solaris Studio 12.4, with its bin directory in your $PATH (ahead of any other path containing the “cc” binary), in order to get a C compiler that will take the options that Apache’s apxs module tool will attempt to use. This is a free download to those with an Oracle Technology Network (OTN) account, so you will need one of those too. As of this writing, OTN accounts are also free.

Newer versions of Oracle Solaris Studio/Oracle Developer Studio require a patch on UltraSPARC-T1 machines (Solaris SRU 20) to enable the VIS hardware capability. You must have a valid Oracle Solaris support contract to obtain this patch.

Please consult the appropriate Solaris documentation for how to install packages.

I have not tested this procedure on x86, or Solaris 11.4. If using versions of Solaris other than 11.3, or a different CPU architecture, your mileage may vary.

Installing Lucee Application Server


  1. Download the latest Lucee JAR file from https://download.lucee.org/
  2. Stop the Tomcat servlet container
  3. Place the downloaded JAR file into the Tomcat 8 lib directory
  4. Add Lucee’s servlet and mapping configuration to Tomcat web.xml
  5. Restart Tomcat
  6. Make sure Tomcat is running


Downloading Lucee JAR File

Using your web browser, download the latest Lucee JAR file from https://download.lucee.org, and transfer it to your Solaris server (if not downloading directly from the server itself).

Stop Tomcat

svcadm disable svc:/network/http:tomcat8

Place JAR In Tomcat Lib Directory

cp /path/to/lucee-n.n.n.n.jar /usr/tomcat8/lib

Configure Lucee Servlet and Mappings

Open /var/tomcat8/conf/web.xml for editing, and add the following lines near the bottom of the file, just before the closing </web-app> tag (the servlet class names below are the standard ones shipped in the Lucee JAR):

 <servlet>
  <servlet-name>CFMLServlet</servlet-name>
  <description>Lucee CFML Engine</description>
  <servlet-class>lucee.loader.servlet.CFMLServlet</servlet-class>
  <load-on-startup>1</load-on-startup>
 </servlet>

 <servlet>
  <servlet-name>RESTServlet</servlet-name>
  <description>Lucee Servlet for RESTful services</description>
  <servlet-class>lucee.loader.servlet.RestServlet</servlet-class>
  <load-on-startup>2</load-on-startup>
 </servlet>

 <servlet-mapping>
  <servlet-name>CFMLServlet</servlet-name>
  <url-pattern>*.cfm</url-pattern>
  <url-pattern>*.cfml</url-pattern>
  <url-pattern>*.cfc</url-pattern>
 </servlet-mapping>

 <servlet-mapping>
  <servlet-name>RESTServlet</servlet-name>
  <url-pattern>/rest/*</url-pattern>
 </servlet-mapping>



Start Tomcat

svcadm enable svc:/network/http:tomcat8

Make Sure Tomcat is Running

Type the following command:

svcs | grep tomcat8

You should see output similar to the following:

online         20:52:13 svc:/network/http:tomcat8

If the first column of output does not say online, check /var/tomcat8/logs/catalina.out for any error messages, and check /var/tomcat8/conf/web.xml for syntax errors and/or omissions before running svcadm restart svc:/network/http:tomcat8 again.

Configure Apache 2.2 Proxy


  1. Create /etc/apache2/2.2/conf.d/lucee_proxy.conf
  2. Restart the Apache SMF service
  3. Make sure Apache is running


Create Proxy Configuration

In your favorite editor, open /etc/apache2/2.2/conf.d/lucee_proxy.conf, and place the following lines into it:

<IfModule mod_proxy.c>
 <Proxy *>
  Allow from all
 </Proxy>

 ProxyPreserveHost On
 ProxyPassMatch ^/(.+\.cf[cm])(/.*)?$ ajp://localhost:8009/$1$2
 ProxyPassMatch ^/(.+\.cfchart)(/.*)?$ ajp://localhost:8009/$1$2
 ProxyPassMatch ^/(.+\.cfml)(/.*)?$ ajp://localhost:8009/$1$2
 ProxyPassReverse / ajp://localhost:8009/
</IfModule>

Adjust localhost:8009 if your Tomcat AJP connector listens on a different host or port.

Restart Apache SMF Service

Type the following command into your terminal:

svcadm restart svc:/network/http:apache22

Make Sure Apache is Running

Type the following command:

svcs | grep apache22

You should see output similar to the following:

online         20:48:12 svc:/network/http:apache22

If the first column of output does not show online, check /etc/apache2/2.2/conf.d/lucee_proxy.conf for syntax errors and restart Apache 2.2 once more.

Build mod_cfml From Source

The mod_cfml project’s GitHub sources contain three invalid characters (likely a UTF-8 byte-order mark) at the very beginning of mod_cfml.c. I have created an archive of the corrected sources, which is available at http://ftp.coherent-logic.com/pub/solaris/lucee/mod_cfml/mod_cfml-solaris-source.tar.gz (also available via FTP).

You will need to download this file to your Solaris server, and unarchive it with the following command:

tar zxf mod_cfml-solaris-source.tar.gz

This will leave a mod_cfml-master subdirectory in the place where you un-tarred the archive.

Navigate to mod_cfml-master/C, and type the following commands:

export PATH=/opt/solarisstudio12.4/bin:$PATH
APXS=/usr/apache2/2.2/bin/apxs gmake
APXS=/usr/apache2/2.2/bin/apxs gmake install

This will build mod_cfml.so and install it into /usr/apache2/2.2/libexec.

If you get errors during the build process, make sure you have typed all of the above commands correctly, and that gnu-make, gnu-sed, and Oracle Solaris Studio 12.4 are correctly installed.

Configure Apache with mod_cfml


  1. Add mod_cfml configuration to /etc/apache2/2.2/httpd.conf
  2. Restart Apache 2.2
  3. Make sure Apache 2.2 is running


Add mod_cfml Configuration to Apache 2.2

In your preferred editor, open /etc/apache2/2.2/httpd.conf and add the following lines to the bottom:

LoadModule modcfml_module libexec/mod_cfml.so
CFMLHandlers ".cfm .cfc .cfml"
LogHeaders false
LogHandlers false

Restart Apache 2.2

Type the following command:

svcadm restart svc:/network/http:apache22

Make Sure Apache 2.2 is Running

Type the following command:

svcs | grep apache22

You should see output similar to the following:

online         20:48:12 svc:/network/http:apache22

If the first column of output does not contain online, check /etc/apache2/2.2/httpd.conf to make sure you did not make any typos while adding the mod_cfml configuration lines to it, and restart Apache 2.2 once again.

Configure mod_cfml Valve in Tomcat 8


  1. Copy mod_cfml-valve_v1.1.05.jar to /usr/tomcat8/lib/
  2. Add valve configuration to /var/tomcat8/conf/server.xml
  3. Restart Tomcat 8 SMF service
  4. Make sure Apache 2.2 and Tomcat 8 are running


Copy mod_cfml Valve JAR to Tomcat 8 Library Directory

Now navigate to the mod_cfml-master/java directory created when you untarred mod_cfml in a prior step. Within that directory, type the following command:

cp mod_cfml-valve_v1.1.05.jar /usr/tomcat8/lib/

Add mod_cfml Valve Configuration to Tomcat 8

Open /var/tomcat8/conf/server.xml for editing, and find the <Host> entry that begins with the following:

<Host name="localhost"  appBase="webapps"
      unpackWARs="true" autoDeploy="true">

After the opening <Host> tag, but before the closing </Host> tag, add the following lines of XML code (the attribute values shown are the mod_cfml defaults):

      <Valve className="mod_cfml.core"
             loggingEnabled="false"
             maxContexts="200"
             timeBetweenContexts="2000"
             scanClassPaths="false" />

This will enable the mod_cfml valve for localhost.

Restart Tomcat 8 SMF Service

Type the following command:

svcadm restart svc:/network/http:tomcat8

Make Sure Tomcat 8 and Apache 2.2 are Running

Type the following command:

svcs | egrep "tomcat8|apache22"

You should see output similar to the following:

online         20:48:12 svc:/network/http:apache22
online         20:52:13 svc:/network/http:tomcat8

If the first column does not read online for both services, check /var/tomcat8/conf/server.xml for any inadvertent typos or syntax errors.

You should now be running Lucee successfully! In your favorite web browser, navigate to http://hostname/lucee/admin/server.cfm. If everything worked correctly, you should see the Lucee Server Administrator. Please set secure passwords for both the Server Administrator (global to the entire server), and the Web Administrator for each Lucee web context you create.

As you are now running mod_cfml, any VirtualHost entries you create in Apache will be set up for you automatically in Tomcat, without having to edit and maintain two sets of messy XML configuration files.



I Am The Anti-Web: Part 1

This multi-part series will explore the reasons why the modern World Wide Web and its ill-designed suite of languages, protocols, and ecosystem are the single most harmful development in the entire history of computing. Within it, we will make every effort to bring down its technologies, its proponents, and the false economies it has engendered. No effort will be wasted on attempting to justify it, nor to show charity to those involved.


My desktop computer has the following specs:

  • (2) Intel Xeon X5680 6-core, 12-thread processors at 3.33GHz
  • 48GB of PC2100 DDR ECC RAM
  • NVIDIA GeForce GTX-1080 Founders Edition GPU
  • (2) 240GB 6Gb/s SATA SSDs in RAID0 (OS and apps)
  • (4) 2TB 10,000RPM 6Gb/s SATA HDDs (data)
  • Debian GNU/Linux 10, running the latest proprietary NVIDIA graphics drivers
  • Windows 7 Professional is available by way of a dual-boot configuration, though this is very rarely employed

The desktop application for Slack, the popular messaging and collaboration platform, takes 13.35 seconds on my machine to switch workspaces and render its basic view. It should also be noted that I have a 400Mbit/sec Internet connection here, and my workstation connects to the core switch by way of a pair of gigabit Ethernet cables in an LACP bond.

The reason for this is that the Slack desktop application is not a native application at all. It is a JavaScript and HTML 5 application that targets the Electron framework, which allows web developers to produce desktop-style applications that run on Windows, macOS, and Linux. Discord and Skype are also built upon the same technology, which bundles the Chromium browser and its V8 JavaScript environment into application packages, and allows JavaScript to access underlying operating system services.

Evil corporations love this technology, as the proliferation of code monkeys adept at copying and pasting from W3Schools and Stack Overflow makes labor cheap (at least on the surface–technical debt from this generation of irresponsible “developers” is likely to be higher than anything we’ve ever seen), and they can target all three major platforms from a single codebase. With a sufficiently large army of marketing drones and a lack of alternatives, these companies have brainwashed their users into believing that an application which displays a spinning progress indicator for more than ten seconds, just to render its basic view, is an acceptable user experience.

Look! We can chase our own tails for 13.35 seconds!

My first computer, with a 4.77MHz 8088 CPU and 512KB of RAM, could repaginate an entire WordStar document or recalc an entire Lotus 1-2-3 spreadsheet in that much time or less, and it rendered the basic shells of its application views in sub-second timeframes. A modern native application (one written in a real programming language, using real operating system APIs), with all its flashy UI chrome and graphics, demonstrates the same level of performance.

In the early to mid 1990s, developers attempting to use Visual Basic for commercial applications were ridiculed and told to go learn C++ or even Pascal, because VB (until version 5) was a threaded p-code implementation, rather than a true compiled language, and performance thus suffered. But, even the worst-performing Visual Basic application can render its views much, much faster than any Electron application, while running on a 16MHz 386SX with no FPU!

Woke AF

I suppose that the culture of the day is to blame, as the majority of modern web “developers” are crunchy hipster trend-slaves, sitting in front of their MacBooks at Starbucks, sipping on their half-caf no-whip skinny kombucha soy abominations and repeating argumentum ad populum to themselves until they believe that everything that’s new must be true, while changing technology stacks faster than Taylor Swift changes boyfriends.

Got a long list of ex-frameworks, they’ll tell you I’m insane…

Much of this is just bad economics: the Silicon Valley modus operandi is to come up with an idea (synergizing), beat the soul out of it in focus groups (market research), get Vulture Capitalist funding (where a simple equity position means “we’ll take the equity, you assume the position”), release the most minimally-functional, poor-performing pile of slop you can (rapid iteration), sell it to a greedy and evil Fortune 500 (here’s your millions, now, give us your soul), take your money, and go do something else. There is no desire in this shitfest shark-tank of capitalism run amok to actually build good products or lasting developer institutions. It’s a one-night stand, wham-bam-thank-you-ma’am, entrepreneurial trainwreck.

And, the developers aren’t even leaving bus fare on the nightstand for their hapless users.

We must do better.

Prefiniti: Architecture and Updates

The old Prefiniti codebase (WebWare.CL and Prefiniti 1.0/1.5/1.6) was bleeding-edge at the time of its original implementation (circa 2007-2009), as it used a technique called AJAX (Asynchronous JavaScript and XML), which allowed all navigation operations within the site to load only the parts of the page that needed to change.

Essentially, Prefiniti implemented what today would be called a “container/fragment” approach, where a single container page’s DOM contains “div” elements with a specific ID attribute into which “fragment” pages would be loaded. In the case of Prefiniti, the container pages were called webwareBase.cfm, appBase.cfm, Prefiniti-Steel-1024×768.cfm, or prefiniti_framework_base.cfm (depending on which Prefiniti version we’re discussing). What all of these container pages have in common is a pair of HTML div elements called sbTarget and tcTarget, which stand for “sidebar target” and “time collection target”, respectively. sbTarget is normally a left-hand navigation sidebar containing an accordion control, while tcTarget is the main element to which application content is loaded and rendered. It is so named because the time collection component of Prefiniti was the first to use AJAX techniques.

There is a utility function written in JavaScript, called AjaxLoadPageToDiv(), which takes as arguments the ID attribute of a DOM element, and a URL to be loaded into and rendered within that DOM element. If the DOM element was tcTarget, AjaxLoadPageToDiv() would look within the loaded document for the XML tags wwafcomponent, wwafsidebar, wwafdefinesmap, wwafpackage, and wwaficon. These tags (where wwaf stands for WebWare Application Framework) would determine the component name, contextual sidebar, package name, and icon of the content being loaded, and trigger a recursive load of the appropriate sidebar fragment into sbTarget.
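The metadata-tag scan can be sketched as follows (a hypothetical Python reconstruction; the real implementation was client-side JavaScript, and the fragment markup here is invented for illustration):

```python
import re

# Hypothetical fragment markup, modeled on the wwaf* tags described above
fragment = (
    '<wwafcomponent>timeCollection</wwafcomponent>'
    '<wwafsidebar>tc_sidebar.cfm</wwafsidebar>'
    '<wwaficon>clock.png</wwaficon>'
    '<p>...rendered fragment content...</p>'
)

# Collect every <wwafXXX>value</wwafXXX> pair into a metadata dictionary;
# ordinary markup like <p> is left alone
meta = dict(re.findall(r'<(wwaf\w+)>(.*?)</\1>', fragment))

# meta['wwafsidebar'] would then drive the recursive sidebar load into sbTarget
print(meta['wwafcomponent'], meta['wwafsidebar'])
```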

The difficulty with this approach arose from the legacy of the application: the direct predecessor of WebWare.CL/Prefiniti was a simple order form for customers to order land surveys from a local surveying firm, Center Line Services. This original application did not use AJAX at all, and employed some legacy techniques in its use of server-side rendering, which I’ll explain here:

Prefiniti is implemented in a programming language and application server known as ColdFusion. Upon receiving an HTTP request for a ColdFusion template, which is denoted by a .cfm file extension, ColdFusion looks in the current directory for a file called Application.cfm, which it will run and render prior to the requested template. Application.cfm’s job is to set up session variables, application timeouts, cookies, etc. for things like user authentication and maintaining application state. If Application.cfm is not found in the same directory as the requested template, ColdFusion will traverse all parent directories up to the site’s document root until it finds one. Once Application.cfm is run and rendered, ColdFusion will run and render the template that was requested, and then look for OnRequestEnd.cfm (using the same directory traversal rules as used by Application.cfm), and run and render it.
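The lookup rule described above is just an upward directory walk, which can be sketched like so (an illustrative Python model of ColdFusion's traversal, not ColdFusion's actual code; the paths are hypothetical):

```python
import os

def find_application_cfm(template_path, docroot):
    """Walk from the requested template's directory up toward the
    site's document root, returning the first Application.cfm found,
    or None if there isn't one."""
    d = os.path.dirname(os.path.abspath(template_path))
    docroot = os.path.abspath(docroot)
    while True:
        candidate = os.path.join(d, "Application.cfm")
        if os.path.isfile(candidate):
            return candidate
        # Stop at the document root (or the filesystem root, defensively)
        if d == docroot or d == os.path.dirname(d):
            return None
        d = os.path.dirname(d)
```

An empty Application.cfm dropped into a subdirectory satisfies the search immediately, which is exactly why the trick described later in this article stopped the parent template from rendering inside the fragments.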

This is not a bad technique, except that the original application on which WebWare.CL/Prefiniti was based used Application.cfm to render DOCTYPE, html, head, and body elements, along with a site header, navigation menubar, and a toolbar, and OnRequestEnd.cfm would close these tags, while any requested template would fill in the rest of the page body as appropriate.

The problem with this manifested when AjaxLoadPageToDiv() would request a fragment to be loaded into tcTarget or sbTarget, the fragment also being a ColdFusion template. Application.cfm would be processed in the normal way, and the header, navbar, and toolbar–which were only supposed to exist at the top of the page, above the sbTarget and tcTarget div elements–would be repeated within both sbTarget and tcTarget.

At this point in the application’s development, Application.cfm had grown tremendously complex, and I, as a relatively green CF developer, couldn’t figure out how to move the visual content out of it and into the container template (webwareBase.cfm et al.) in order to fix the problem correctly. My solution at the time was to place fragments into subdirectories (tc, workFlow, socialnet, businessnet, etc.) of the document root, each subdirectory having an empty Application.cfm file within it, to prevent rendering of the parent Application.cfm within sbTarget and tcTarget. This worked, except that page fragments no longer had access to any session state, including the ID of the currently logged-in user.

My solution to this problem was to generate JavaScript on the server side that would create front-end JS variables for each needed session variable, have that JS code run when the application’s login form was submitted, and have AjaxLoadPageToDiv() pass all of those variables to fragment pages as part of the HTTP query string. This meant that all form submissions required custom JavaScript to build a GET request that would collect form fields’ values and submit them to the back-end, which is a horrible abuse of GET (the HTTP standards intend for such submissions to be POSTed, placing the form fields within the body of the request rather than in the URL). It also meant that session timeouts were handled poorly, security problems were many, and adding new features to the application was complex and difficult, requiring a great deal of JavaScript code that bloated the initial load of the application to unreal proportions.
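The abuse is easy to see in miniature (the field names here are hypothetical, and Prefiniti did this with hand-written JavaScript rather than Python):

```python
from urllib.parse import urlencode

fields = {"userID": "42", "sessionToken": "s3cr3t"}

# The Prefiniti-era approach: session state rides in the URL, where it is
# recorded in proxy logs, server logs, and browser history
get_request = "GET /tc/timesheet.cfm?" + urlencode(fields) + " HTTP/1.1"

# The intended approach: the same fields travel in the POST body instead
post_body = urlencode(fields)
post_request = (
    "POST /tc/timesheet.cfm HTTP/1.1\r\n"
    "Content-Type: application/x-www-form-urlencoded\r\n"
    f"Content-Length: {len(post_body)}\r\n"
    "\r\n" + post_body
)

print(get_request)
```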

In the current re-factor of Prefiniti, these problems have nearly all been mitigated. Visual rendering has all been moved out of Application.cfm and into prefiniti_framework_base.cfm, the empty Application.cfm templates in the application subdirectories (tc, workFlow, socialnet, etc.), have all been removed, and page fragment templates now have full access to session state. The process to strip out dependencies on GET requests and huge query strings is in progress, and most of the JavaScript bloat will thus be easy to remove, future-proofing the application and making it secure, and much easier to maintain and extend. This also has the benefit that the server-side modules for core framework functionality and database I/O can be loaded once for the entire application and made available to page fragments with no additional effort.

UI updates are also on the way, by way of Bootstrap 4, making Prefiniti a modern, responsive, and mobile-ready platform for web applications.

Here’s to the future!

Why UTF-8 is a train wreck (or: UNIX Doesn’t Represent Everyone)

This post won’t go into the gory details of Unicode or the UTF-8 encoding. That ground has been covered better elsewhere than I could ever hope to here. What we’re looking at today is almost as much political as technical, although technical decisions play a huge part in the tragedy. What I am positing today is that UTF-8–for all its lofty compatibility goals–fails miserably in the realm of actual, meaningful compatibility.

The supposed brilliance of UTF-8 is that its code points numbered 0-127 are entirely compatible with 7-bit ASCII, so that a data stream containing purely ASCII data will never need more than one byte per encoded character. This is all well and good, but the problem is that, aside from UNIX and its derivatives, the vast majority of ASCII-capable hardware and software made heavy use of the high-order bit, specifying characters for code points 128-255. UTF-8, however, claims the high-order bit for its own structural purposes: a set high bit marks a byte as the lead or continuation byte of a multi-byte sequence, so legacy 8-bit data is simply invalid UTF-8. This makes 7-bit ASCII (as well as encodings touting 7-bit ASCII compatibility) little more than a mental exercise for most systems: like it or not, the standard for end-user systems was set by x86 PCs and MS-DOS, not UNIX, and MS-DOS and its derivatives make heavy use of the high-order bit. UNIX maintained 7-bit purity in most implementations, as mandated by its own portability goals, and UTF-8’s ultimate specification was sketched on a New Jersey diner placemat by Ken Thompson, the inventor of UNIX, and Rob Pike, one of its earliest and most prolific contributors. UTF-8 effectively solved the problem for most UNIX systems, which were pure 7-bit systems from the beginning. But why should UTF-8’s massive shortcomings have been foisted upon everyone else, as if UNIX–like many of its proponents–were some playground bully, shoving its supposed superiority down everyone else’s throats?
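The collision is easy to demonstrate, using code page 437 (the original IBM PC character set) as a representative 8-bit ASCII extension:

```python
# In CP437, byte 0x82 is the single character 'é'
b = bytes([0x82])
print(b.decode("cp437"))  # prints: é

# The very same byte, on its own, is not valid UTF-8 at all
try:
    b.decode("utf-8")
except UnicodeDecodeError as e:
    print("invalid UTF-8:", e.reason)

# UTF-8 spends the high bit on sequence structure instead:
# 'é' becomes two bytes, each with the high-order bit set
print("é".encode("utf-8").hex())  # prints: c3a9
```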

It should not. The UNIX philosophy, like functional programming, microkernels, role-based access control, and RISC, has its merits, but it is not the only kid on the block, and solutions like UTF-8 that just happen to work well in UNIX shouldn’t be forced upon environments where they only break things. Better to make a clean break to a sane, fixed-width encoding like UTF-32, perhaps providing runtimes for both ASCII (including its 8-bit extensions) and the new encoding to allow software to be ported to use it piecemeal. At least with something like UTF-32, data from other encodings can be programmatically converted to it, whereas with UTF-8 with its two-bit 8th-bit meddling, there’s no way of knowing whether you’re dealing with invalid code points, kludgey shift characters, or some ASCII extension that was used for a meaningful purpose.
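By way of comparison, a quick sketch of what a fixed-width encoding buys you (stdlib Python; the sample strings are arbitrary):

```python
s = "naïve ☃"

# In UTF-32, every code point occupies exactly four bytes,
# so indexing and length arithmetic are trivial
enc = s.encode("utf-32-le")
assert len(enc) == 4 * len(s)

# Legacy 8-bit data can be converted programmatically and unambiguously:
# decode with the known source encoding, re-encode as fixed-width UTF-32
legacy = bytes(range(0x80, 0x85)).decode("cp437")  # five DOS-era bytes
roundtrip = legacy.encode("utf-32-le")
print(len(roundtrip) // 4, "code points")
```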

Now I’ve seen everything…

The president and CEO of OSEHRA recently posted the following announcement:

The Department of Veterans Affairs yesterday announced a decision to select a new electronic health record system based on the same platform that DoD purchased a couple of years ago. The announcement recognizes many unique needs of VA that differ from the DoD. VA would thus not be implementing an identical EHR. VA is trying to create a future health IT ecosystem that takes advantage of previous investments with this new platform, as well connections with many other IT systems in the private sector. The industry trend toward open platforms, open APIs, and open source software is expected to remain integral to VA’s strategy to build a new and interoperable ecosystem. OSEHRA provides a valuable link joining VA to the broad health IT community. This activity will remain critical to the success of VA’s transition strategy by eliminating future gaps and conflicts in an ever more complex ecosystem. Transition to a new EHR system will require years of efforts and in-depth expertise in VistA that currently resides mostly in the OSEHRA community. Innovations in health IT such as cloud-based implementations, analytics, clinical decision support systems, community-based care, and connected health will come from domains external to traditional EHR systems. Recent VA investments in eHMP and DHP are examples of open source innovations external to traditional EHRs, and they are expected to evolve as new platforms within the VA’s emerging health IT ecosystem.

Seong K. Mun, PhD
President and CEO

I suppose if we have our heads in such a place where the sun doesn’t reach, we can pretend that the VA’s adoption of a proprietary EHR is somehow a victory for open source.

I suppose, however, that I shouldn’t be surprised, considering that OSEHRA is just a dog-and-pony show to allow the government to pretend that it supports open source while doing exactly the opposite.

It helps little that large and critical components of eHMP–which is admittedly an extremely impressive project–aren’t even published in OSEHRA’s code-in-flight releases.

In the sand hast thou buried thine own heads, OSEHRA. An ally you are not.

Hasta la VistA, Baby!


This article implies that VA dropping VistA would be good for VistA. This makes the assumption that the extra-governmental VistA community and private vendors (like MedSphere and DSS) would step in to fill the void left by VA’s departure from VistA development. If, instead, this community continues to expect salvation from within the VA bureaucracy, VistA will die.

Also, please remember that I do not in any way fault individual VA developers for the bumbling mismanagement of the product.

It brings me no joy to express the grim reality, but I believe that at least someone needs to speak the difficult truth: politicians have never been friendly to VistA, government cannot effectively manage software projects, and the only bright path forward for VistA is to get it out of the hands of corrupt government cronies like Shulkin.

I’m not going to wring my hands today.

Instead, I’d like to extend my sincerest good wishes to Secretary Shulkin and his team as they embark upon what is sure to be a long and difficult transition to the Cerner EHR. I really do hope it works out for them.

I’m also hardly able to contain my excitement for what this could mean for the future of VistA. Provided the VA stays the course with this plan, its future has never been brighter.

The VA has been trying to get out of software development for years, and has had VistA limping along on life support the whole time. Outside, private-sector vendors have been understandably hesitant to make major changes to the VistA codebase, because they haven’t wanted to break compatibility with the VA’s patch stream. But now, there’s a chance that the patch stream will dry up, along with the stream of bad code, infected with the virus of Cache ObjectScript, and the VA’s marked indifference towards fixing structural problems with core modules like Kernel and FileMan. The VA always hated VistA, and they were atrociously incompetent custodians of it, from the moment it emerged from the rather offensively-named “underground railroad”. They suck at software development, so they should get out of that business and let the open source community take the reins.

This is not to say that there weren’t or aren’t good programmers at the VA: far from it, but VA’s bumbling, incompetent, top-heavy management bureaucracy forever hobbled their best programmers’ best intentions. And let’s be real: had Secretary Shulkin announced that VA was keeping VistA, it would be status quo, business-as-usual. VistA would still be VA’s redheaded stepchild, and the bitrot already plaguing it would get even worse. There was never the tiniest chance that the VA would wake up and start managing VistA well, much less innovating with it. And even if this Cerner migration fails (which is not at all unlikely), there will never be such a chance. Its successes stem entirely from its origins as an unauthorized, underground skunkworks project by those great VistA pioneers who courageously thumbed their noses at bureaucratic stupidity. VistA only ever succeeded in spite of the VA; not because of it.

But, what about patient care? Won’t it get worse as a result of dropping such a highly-rated EHR?

Worse than what? VA sucks at that too, and always has. Long waiting lists, poor quality of care, bad outcomes, scheduling fraud, skyrocketing veteran suicides: none of this is related in any way to VA’s technology, for better or worse. It’s just that pouring money into IT changes is a quick way for a bureaucrat with a maximal career span far too short to effect any real change to appear that they’re doing something. When IT projects fail, they can dump it in their successors’ laps, or blame the contractor, and go upon their merry way, visiting fraud, waste, and abuse upon the taxpayer, while those who committed to making the ultimate sacrifice in service of king and country are left wondering why it still takes them months just to be seen.

So I sincerely do wish the VA the best of luck in its witless endeavor, and hope that they succeed, by whatever comical measure of success their bumbling allows. Hopefully, this will open the door for the open-source community to take the awesomeness that is VistA and bring it forward into a brighter and happier future.

Feel free to join me. Virtual popcorn and soda is free.

The Problem With Package Managers

As Linux moves farther away from its UNIX roots, and more towards being yet another appliance for the drooling masses (the same drooling masses who just five years ago couldn’t grok the difference between a CD-ROM tray and a cup holder), our once great proliferation of usable choices has dwindled due to a tendency on the part of developers to target only Debian- or Red Hat-based distributions, with a strong bias towards Ubuntu on the Debian side, while few of the more generous developers will also target SuSE, and even fewer will distribute software as a distribution-agnostic tarball. This situation leaves users of other distributions in a precarious position, especially in the case of those of us who–like the author of this article–believe that systemd is a baroque, labyrinthine monument to bogosity (how Lennart Poettering manages to get hired by any reputable software development firm is an atrocity that boggles the mind–his other big “hit” is a three-coil, peanut-laden steamer of a solution-looking-for-a-problem called PulseAudio), and would seek one of the increasingly rare sysvinit based distributions to get away from it.

This is a problem mostly due to package managers. If you’re on a Debian-based system, you get apt. Red Hat, yum. SuSE, zypper. These utilities should need no introduction, and are often praised by Linux users: a single command will install a package and all of its required shared libraries and dependencies, and another command will upgrade packages to the latest and greatest versions, all from a centralized, cloud-based repository or list of repositories. They do provide some convenience, but at a cost: the days of reliably being able to find a simple tarball that will work with the incantation of ./configure; make; make install seem to be numbered. This was a nice, cross-platform solution, and had the added benefit of producing binaries that were well-optimized for your particular machine.

One bright light in all this darkness is the pkgsrc tool in NetBSD: you check out a full source tree from a CVS repository, and this creates a directory structure of categories (editors, databases, utilities, etc.) into which are further subdirectories representing packages. All you need to do is descend into the desired subdirectory and type an appropriate make incantation to download the package and its dependencies, build them, and install them to your system. Updates are similar: fetch the latest updates from the CVS repo, and repeat the process.

However, not even pkgsrc has solved the other big problem with most package managers, and that is the politics of getting new packages into the repositories. The Node.js package manager, npm, is the only one that does this correctly (in the FOSS sense) in any way: you go to the npmjs.org website, create an account, choose a package name (and hope it hasn’t already been taken by another developer), and you are in charge of that little corner of the npm world. You manage your dependencies, your release schedule, your version scheme, the whole nine yards. With Linux distributions, it seems that only a blood sacrifice to the gatekeepers will allow you to contribute your own packages, and even when you get past their arcane requirements, it is still a mass of red tape just to publish patches and updated versions of your software. Node.js, for instance, has not been updated in the mainline distribution repositories since v0.10, which is by all measures an antique.

In order to meet my standards, three solutions should be employed together:

  • Publicly and brutally shame developers who release only deb and rpm packages but no ./configure; make; make install tarball until they are so insecure that they cry into their chocolate milk and do the right thing (or strengthen the developer gene pool by quitting altogether and opting for a job wiping viruses for drooling PC users with The Geek Squad)
  • Push the Linux distributions to abandon the brain-dead cathedral approach to repo management and opt for a more bazaar-like egalitarian approach like npm
  • Make countless, humiliating memes of Lennart Poettering in embarrassing and compromising contexts (this bit is more for the health of UNIX as a whole than for package managers, but it’s the duty of every good UNIX citizen)