Lucee Installation on Solaris 11 (SPARC)

Here, we will look at the process of installing the Lucee CFML application server under Apache Tomcat 8 on Oracle Solaris 11 on SPARC machines. We will also set up an Apache proxy with mod_cfml to ease the process of deploying Apache VirtualHosts, without having to configure each virtual host twice (once in Apache itself and again in Tomcat’s server.xml file).

My Setup

  • The server is a SunFire T2000 with an UltraSPARC-T1 CPU. Great performance for Java workloads like this.
  • I’m running Solaris 11.3, as 11.4 does not support the UltraSPARC-T1 CPU.
  • I’m running this setup in a Solaris non-global zone.

Prerequisites

You will need to install the following packages from the “solaris” Oracle IPS repository:

  • apache-22 (as of this writing, Apache 2.4 segfaults when using mod_proxy)
  • tomcat-8
  • gnu-make
  • gnu-sed

You will also need Oracle Solaris Studio 12.4, with its bin directory in your $PATH (ahead of any other directory containing a “cc” binary), in order to get a C compiler that accepts the options Apache’s apxs module tool will attempt to use. Solaris Studio is a free download for anyone with an Oracle Technology Network (OTN) account, so you will need one of those too. As of this writing, OTN accounts are also free.

Newer versions of Oracle Solaris Studio/Oracle Developer Studio require a patch on UltraSPARC-T1 machines (Solaris SRU 20) to enable the VIS hardware capability. You must have a valid Oracle Solaris support contract to obtain this patch.

Please consult the appropriate Solaris documentation for how to install packages.

I have not tested this procedure on x86 or on Solaris 11.4. If you are using a version of Solaris other than 11.3, or a different CPU architecture, your mileage may vary.

Installing Lucee Application Server

Steps

  1. Download the latest Lucee JAR file from https://download.lucee.org/
  2. Stop the Tomcat servlet container
  3. Place the downloaded JAR file into the Tomcat 8 lib directory
  4. Add Lucee’s servlet and mapping configuration to Tomcat web.xml
  5. Restart Tomcat
  6. Make sure Tomcat is running

Details

Downloading Lucee JAR File

Using your web browser, download the latest Lucee JAR file from https://download.lucee.org, and transfer it to your Solaris server (if not downloading directly from the server itself).
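
For example, if wget is installed on your Solaris system and you know the direct URL of the current release, you can fetch the JAR straight to the server (the path and filename below are placeholders; substitute the actual URL and version from the download page):

wget https://download.lucee.org/path/to/lucee-n.n.n.n.jar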

Stop Tomcat

svcadm disable svc:/network/http:tomcat8

Place JAR In Tomcat Lib Directory

cp /path/to/lucee-n.n.n.n.jar /usr/tomcat8/lib

Configure Lucee Servlet and Mappings

Open /var/tomcat8/conf/web.xml for editing, and add the following lines near the bottom of the file, inside the <web-app> element (that is, before the closing </web-app> tag):

<servlet>
 <description>Lucee CFML Engine</description>
 <servlet-name>CFMLServlet</servlet-name>
 <servlet-class>lucee.loader.servlet.CFMLServlet</servlet-class>
 <load-on-startup>1</load-on-startup>
</servlet>

<servlet>
 <description>Lucee Servlet for RESTful services</description>
 <servlet-name>RestServlet</servlet-name>
 <servlet-class>lucee.loader.servlet.RestServlet</servlet-class>
 <load-on-startup>2</load-on-startup>
</servlet>

<servlet-mapping>
 <servlet-name>CFMLServlet</servlet-name>
 <url-pattern>*.cfm</url-pattern>
 <url-pattern>*.cfml</url-pattern>
 <url-pattern>*.cfc</url-pattern>
 <url-pattern>/index.cfc/*</url-pattern>
 <url-pattern>/index.cfm/*</url-pattern>
 <url-pattern>/index.cfml/*</url-pattern>
</servlet-mapping>

<servlet-mapping>
  <servlet-name>RestServlet</servlet-name>
  <url-pattern>/rest/*</url-pattern>
</servlet-mapping>
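
Before moving on, you can optionally check the edited file for XML well-formedness (assuming xmllint from libxml2 is present, as it normally is on Solaris 11):

xmllint --noout /var/tomcat8/conf/web.xml

No output means the file parses cleanly.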

Start Tomcat

svcadm enable svc:/network/http:tomcat8

Make Sure Tomcat is Running

Type the following command:

svcs | grep tomcat8

You should see output similar to the following:

online         20:52:13 svc:/network/http:tomcat8

If the first column of output does not say online, check /var/tomcat8/logs/catalina.out for any error messages, and check /var/tomcat8/conf/web.xml for syntax errors and/or omissions before running svcadm restart svc:/network/http:tomcat8 again.
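
For example, to view the most recent log entries:

tail -50 /var/tomcat8/logs/catalina.out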

Configure Apache 2.2 Proxy

Steps

  1. Create /etc/apache2/2.2/conf.d/lucee_proxy.conf
  2. Restart the Apache SMF service
  3. Make sure Apache is running

Details

Create Proxy Configuration

In your favorite editor, open /etc/apache2/2.2/conf.d/lucee_proxy.conf, and place the following lines into it:

<IfModule mod_proxy.c>
 <Proxy *>
  Allow from 127.0.0.1
 </Proxy>

 ProxyPreserveHost On
 ProxyPassMatch ^/(.+\.cf[cm])(/.*)?$ ajp://127.0.0.1:8009/$1$2
 ProxyPassMatch ^/(.+\.cfchart)(/.*)?$ ajp://127.0.0.1:8009/$1$2
 ProxyPassMatch ^/(.+\.cfml)(/.*)?$ ajp://127.0.0.1:8009/$1$2
 ProxyPassReverse / ajp://127.0.0.1:8009/
</IfModule>

Restart Apache SMF Service

Type the following command into your terminal:

svcadm restart svc:/network/http:apache22

Make Sure Apache is Running

Type the following command:

svcs | grep apache22

You should see output similar to the following:

online         20:48:12 svc:/network/http:apache22

If the first column of output does not show online, check /etc/apache2/2.2/conf.d/lucee_proxy.conf for syntax errors and restart Apache 2.2 once more.
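
You can also check the configuration syntax directly, without another restart cycle, using Apache’s own checker (assuming the standard Solaris location of the apachectl wrapper):

/usr/apache2/2.2/bin/apachectl configtest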

Build mod_cfml From Source

The mod_cfml project’s GitHub sources contain three invalid characters at the very beginning of mod_cfml.c. I have created an archive of the corrected sources, which is available at http://ftp.coherent-logic.com/pub/solaris/lucee/mod_cfml/mod_cfml-solaris-source.tar.gz (also available via FTP).

You will need to download this file to your Solaris server and extract it with the following command:

tar zxf mod_cfml-solaris-source.tar.gz

This will leave a mod_cfml-master subdirectory in the place where you un-tarred the archive.

Navigate to mod_cfml-master/C, and type the following commands:

export PATH=/opt/solarisstudio12.4/bin:$PATH
APXS=/usr/apache2/2.2/bin/apxs gmake
APXS=/usr/apache2/2.2/bin/apxs gmake install

This will build mod_cfml.so and install it into /usr/apache2/2.2/libexec.
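
You can confirm the module landed where Apache expects it:

ls -l /usr/apache2/2.2/libexec/mod_cfml.so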

If you get errors during the build process, make sure you have typed all of the above commands correctly, and that gnu-make, gnu-sed, and Oracle Solaris Studio 12.4 are correctly installed.

Configure Apache with mod_cfml

Steps

  1. Add mod_cfml configuration to /etc/apache2/2.2/httpd.conf
  2. Restart Apache 2.2
  3. Make sure Apache 2.2 is running

Details

Add mod_cfml Configuration to Apache 2.2

In your preferred editor, open /etc/apache2/2.2/httpd.conf and add the following lines to the bottom:

LoadModule modcfml_module libexec/mod_cfml.so
CFMLHandlers ".cfm .cfc .cfml"
LogHeaders false
LogHandlers false

Restart Apache 2.2

Type the following command:

svcadm restart svc:/network/http:apache22

Make Sure Apache 2.2 is Running

Type the following command:

svcs | grep apache22

You should see output similar to the following:

online         20:48:12 svc:/network/http:apache22

If the first column of output does not contain online, check /etc/apache2/2.2/httpd.conf to make sure you did not make any typos while adding the mod_cfml configuration lines to it, and restart Apache 2.2 once again.

Configure mod_cfml Valve in Tomcat 8

Steps

  1. Copy mod_cfml-valve_v1.1.05.jar to /usr/tomcat8/lib/
  2. Add valve configuration to /var/tomcat8/conf/server.xml
  3. Restart Tomcat 8 SMF service
  4. Make sure Apache 2.2 and Tomcat 8 are running

Details

Copy mod_cfml Valve JAR to Tomcat 8 Library Directory

You will now need to navigate to the mod_cfml-master/java directory created when you untarred mod_cfml in a prior step. Within that directory, type the following command:

cp mod_cfml-valve_v1.1.05.jar /usr/tomcat8/lib/

Add mod_cfml Valve Configuration to Tomcat 8

Open /var/tomcat8/conf/server.xml for editing, and find the <Host> entry that begins with the following:

<Host name="localhost"  appBase="webapps"
      unpackWARs="true" autoDeploy="true">

After the opening <Host> tag, but before the closing </Host> tag, add the following lines of XML:

<Valve
      className="mod_cfml.core"
      loggingEnabled="false"
      maxContexts="100"
      timeBetweenContexts="0"
      scanClassPaths="false" />

This will enable the mod_cfml valve for localhost.
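
For reference, the relevant portion of server.xml should now look roughly like the following (your <Host> element will likely contain other entries as well, which can be left untouched):

<Host name="localhost"  appBase="webapps"
      unpackWARs="true" autoDeploy="true">

  <Valve
        className="mod_cfml.core"
        loggingEnabled="false"
        maxContexts="100"
        timeBetweenContexts="0"
        scanClassPaths="false" />

  <!-- any pre-existing Valve and logging entries remain here -->

</Host>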

Restart Tomcat 8 SMF Service

Type the following command:

svcadm restart svc:/network/http:tomcat8

Make Sure Tomcat 8 and Apache 2.2 are Running

Type the following command:

svcs | egrep "tomcat8|apache22"

You should see output similar to the following:

online         20:48:12 svc:/network/http:apache22
online         20:52:13 svc:/network/http:tomcat8

If both lines of output do not show online in the first column, check /var/tomcat8/conf/server.xml for any inadvertent typos or syntax errors.

You should now be running Lucee successfully! In your favorite web browser, navigate to http://hostname/lucee/admin/server.cfm. If everything worked correctly, you should see the Lucee Server Administrator. Please set secure passwords for both the Server Administrator (global to the entire server), and the Web Administrator for each Lucee web context you create.

As you are now running mod_cfml, any VirtualHost entries you create in Apache will be set up for you automatically in Tomcat, without having to edit and maintain two sets of messy XML configuration files.
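
For example, adding an ordinary Apache VirtualHost entry like the following should be all that is required (the hostname and DocumentRoot are hypothetical, and Apache 2.2 may also need a matching NameVirtualHost *:80 directive if this is your first name-based virtual host); mod_cfml should create the corresponding Tomcat context automatically the first time a CFML page on that host is requested:

<VirtualHost *:80>
  ServerName www.example.com
  DocumentRoot /var/www/example
</VirtualHost>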

Enjoy!

 


I Am The Anti-Web: Part 1

This multi-part series will explore the reasons why the modern World Wide Web and its ill-designed suite of languages, protocols, and ecosystem are the single most harmful development in the entire history of computing. Within it, we will make every effort to bring down its technologies, its proponents, and the false economies it has engendered. No effort will be wasted on attempting to justify it, nor to show charity to those involved.

ElectronJS

My desktop computer has the following specs:

  • (2) Intel Xeon E5645 6-core, 12-thread processors at 2.4GHz
  • 24GB of PC2100 DDR ECC RAM
  • NVIDIA GeForce GTX-1080 Founders Edition GPU
  • (2) 240GB 6Gb/s SATA SSDs in RAID0 (OS and apps)
  • (4) 2TB 10,000RPM 6Gb/s SATA HDDs (data)
  • Debian 8, running the latest proprietary NVIDIA graphics drivers
  • Windows 7 Professional is available by way of a dual-boot configuration, though this is very rarely employed

The desktop application for Slack, the popular messaging and collaboration platform, takes 13.35 seconds on my machine to switch workspaces and render its basic view.

The reason for this is that the Slack desktop application is not a native application at all. It is a JavaScript and HTML 5 application that targets the Electron framework, which allows web developers to produce desktop-style applications that run on Windows, macOS, and Linux. Discord and Skype are also built upon the same technology, which bundles the Chromium browser and its V8 JavaScript environment into application packages, and allows JavaScript to access underlying operating system services.

Evil corporations love this technology, as the proliferation of code monkeys adept at copying and pasting from W3Schools and Stack Exchange makes labor cheap (at least on the surface–technical debt from this generation of irresponsible “developers” is likely to be higher than anything we’ve ever seen), and they can target all three major platforms from a single codebase. With a sufficiently large army of marketing drones and a lack of alternatives, these companies have brainwashed their users into believing that an application which displays a spinning progress indicator for more than ten seconds, just to render its basic view, is an acceptable user experience.

Look! We can chase our own tails for 13.35 seconds!

My first computer, with a 4.77MHz 8088 CPU and 512KB of RAM, could repaginate an entire WordStar document or recalculate an entire Lotus 1-2-3 spreadsheet in that much time or less, and the basic shells of its application views rendered in sub-second timeframes. A modern native application (one written in a real programming language, using real operating system APIs), with all its flashy UI chrome and graphics, demonstrates the same level of performance.

In the early to mid 1990s, developers attempting to use Visual Basic for commercial applications were ridiculed and told to go learn C++ or even Pascal, because VB (until version 5) was a threaded p-code implementation, rather than a true compiled language, and performance thus suffered. But, even the worst-performing Visual Basic application can render its views much, much faster than any Electron application, while running on a 16MHz 386SX with no FPU!

Woke AF

I suppose that the culture of the day is to blame, as the majority of modern web “developers” are crunchy hipster trend-slaves, sitting in front of their MacBooks at Starbucks, sipping on their half-caf no-whip skinny kombucha soy abominations and repeating argumentum ad populum to themselves until they believe that everything that’s new must be true, while changing technology stacks faster than Taylor Swift changes boyfriends.

Got a long list of ex-lovers, they’ll tell you I’m insane…

Much of this is just bad economics: the Silicon Valley modus operandi is to come up with an idea (synergizing), beat the soul out of it in focus groups (market research), get Vulture Capitalist funding (where a simple equity position means “we’ll take the equity, you assume the position”), release the most minimally-functional, poor-performing pile of slop you can (rapid iteration), sell it to a greedy and evil Fortune 500 (here’s your millions, now, give us your soul), take your money, and go do something else. There is no desire in this shitfest shark-tank of corporatism run amok to actually build good products or build new companies. It’s a one-night stand, wham-bam-thank-you-ma’am, entrepreneurial trainwreck.

And, the developers aren’t even leaving bus fare on the nightstand for their hapless users.

We must do better.

Prefiniti: Architecture and Updates

The old Prefiniti codebase (WebWare.CL and Prefiniti 1.0/1.5/1.6) was bleeding-edge at the time of its original implementation (circa 2007-2009), as it used a technique called AJAX (Asynchronous JavaScript and XML), which allowed all navigation operations within the site to load only the parts of the page that were to be changed.

Essentially, Prefiniti implemented what today would be called a “container/fragment” approach, where a single container page’s DOM contains “div” elements with a specific ID attribute into which “fragment” pages would be loaded. In the case of Prefiniti, the container pages were called webwareBase.cfm, appBase.cfm, Prefiniti-Steel-1024×768.cfm, or prefiniti_framework_base.cfm (depending on which Prefiniti version we’re discussing). What all of these container pages have in common is a pair of HTML div elements called sbTarget and tcTarget, which stand for “sidebar target” and “time collection target”, respectively. sbTarget is normally a left-hand navigation sidebar containing an accordion control, while tcTarget is the main element to which application content is loaded and rendered. It is so named because the time collection component of Prefiniti was the first to use AJAX techniques.

There is a utility function written in JavaScript, called AjaxLoadPageToDiv(), which takes as arguments the ID attribute of a DOM element and a URL to be loaded into and rendered within that DOM element. If the DOM element was tcTarget, AjaxLoadPageToDiv() would look within the loaded document for the XML tags wwafcomponent, wwafsidebar, wwafdefinesmap, wwafpackage, and wwaficon. These tags (where wwaf stands for WebWare Application Framework) would determine the component name, contextual sidebar, package name, and icon of the content being loaded, and trigger a recursive load of the appropriate sidebar fragment into sbTarget.
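
A rough sketch of what such a helper might look like, reconstructed purely from the description above (the function name, element IDs, and wwaf tag names come from the text; the XMLHttpRequest plumbing and regular expression are illustrative, not the original code):

function AjaxLoadPageToDiv(divId, url) {
  var xhr = new XMLHttpRequest();
  xhr.onreadystatechange = function () {
    if (xhr.readyState === 4 && xhr.status === 200) {
      // Render the fetched fragment inside the target div.
      document.getElementById(divId).innerHTML = xhr.responseText;

      // For the main content area, look for WebWare Application Framework
      // metadata tags in the fragment and recursively load the sidebar.
      if (divId === "tcTarget") {
        var sidebar = xhr.responseText.match(/<wwafsidebar>(.*?)<\/wwafsidebar>/);
        if (sidebar) {
          AjaxLoadPageToDiv("sbTarget", sidebar[1]);
        }
      }
    }
  };
  xhr.open("GET", url, true);
  xhr.send(null);
}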

The difficulty with this approach arose from the legacy of the application: the direct predecessor of WebWare.CL/Prefiniti was a simple order form for customers to order land surveys from a local surveying firm, Center Line Services. This original application did not use AJAX at all, and employed some legacy techniques in its use of server-side rendering, which I’ll explain here:

Prefiniti is implemented in a programming language and application server known as ColdFusion. Upon receiving an HTTP request for a ColdFusion template, which is denoted by a .cfm file extension, ColdFusion looks in the current directory for a file called Application.cfm, which it will run and render prior to the requested template. Application.cfm’s job is to set up session variables, application timeouts, cookies, etc. for things like user authentication and maintaining application state. If Application.cfm is not found in the same directory as the requested template, ColdFusion will traverse all parent directories up to the site’s document root until it finds one. Once Application.cfm is run and rendered, ColdFusion will run and render the template that was requested, and then look for OnRequestEnd.cfm (using the same directory traversal rules as used by Application.cfm), and run and render it.

This is not a bad technique, except that the original application on which WebWare.CL/Prefiniti was based used Application.cfm to render DOCTYPE, html, head, and body elements, along with a site header, navigation menubar, and a toolbar, and OnRequestEnd.cfm would close these tags, while any requested template would fill in the rest of the page body as appropriate.

The problem with this manifested when AjaxLoadPageToDiv() would request a fragment to be loaded into tcTarget and sbTarget, the fragment also being a ColdFusion template. Application.cfm would be processed in the normal way, and the header, navbar, and toolbar–which were only supposed to exist once at the top of the page, above the sbTarget and tcTarget div elements–would be repeated within both sbTarget and tcTarget.

At this point in the application’s development, Application.cfm had grown tremendously complex, and I, as a relatively green CF developer, couldn’t figure out how to move the visual content out of it and into the container template (webwareBase.cfm et al.) in order to fix the problem correctly. My solution at the time was to place fragments into subdirectories (tc, workFlow, socialnet, businessnet, etc.) of the document root, each subdirectory having an empty Application.cfm file within it, to prevent rendering of the parent Application.cfm within sbTarget and tcTarget. This worked, except that page fragments no longer had access to any session state, including the ID of the currently logged-in user.

My solution to this problem was to generate JavaScript on the server side that would create front-end JS variables for each needed session variable, run that code when the application’s login form was submitted, and have AjaxLoadPageToDiv() pass all of those variables to fragment pages as part of the HTTP query string. This meant that all form submissions required custom JavaScript to build a GET request that would collect form fields’ values and submit them to the back-end, which is a horrible abuse of GET (the HTTP standards require that such submissions be POSTed instead, placing the form fields within the body of the request rather than in the URL). It also meant that session timeouts were handled poorly, security problems were many, and adding new features to the application was complex and difficult, requiring a great deal of JavaScript code that bloated the initial load of the application to unreal proportions.

In the current re-factor of Prefiniti, these problems have nearly all been mitigated. Visual rendering has all been moved out of Application.cfm and into prefiniti_framework_base.cfm, the empty Application.cfm templates in the application subdirectories (tc, workFlow, socialnet, etc.), have all been removed, and page fragment templates now have full access to session state. The process to strip out dependencies on GET requests and huge query strings is in progress, and most of the JavaScript bloat will thus be easy to remove, future-proofing the application and making it secure, and much easier to maintain and extend. This also has the benefit that the server-side modules for core framework functionality and database I/O can be loaded once for the entire application and made available to page fragments with no additional effort.

UI updates are also on the way, by way of Bootstrap 4, making Prefiniti a modern, responsive, and mobile-ready platform for web applications.

Here’s to the future!

Why UTF-8 is a train wreck (or: UNIX Doesn’t Represent Everyone)

This post won’t go into the gory details of Unicode or the UTF-8 encoding. That ground has been covered better elsewhere than I could ever hope to here. What we’re looking at today is almost as much political as technical, although technical decisions play a huge part in the tragedy. What I am positing today is that UTF-8–for all its lofty compatibility goals–fails miserably in the realm of actual, meaningful compatibility.

The supposed brilliance of UTF-8 is that its code points numbered 0-127 are entirely compatible with 7-bit ASCII, so that a data stream containing purely ASCII data will never need more than one byte per encoded character. This is all well and good, but the problem is that aside from UNIX and its derivatives, the vast majority of ASCII-capable hardware and software made heavy use of the high-order bit, specifying characters for code points 128-255. However, the UTF-8 encoding either chokes on bytes with the high-order bit set or uses that bit for control purposes, including signalling whether the character being encoded will require additional bytes. This makes 7-bit ASCII (as well as encodings touting 7-bit ASCII compatibility) little more than a mental exercise for most systems: like it or not, the standard for end-user systems was set by x86 PCs and MS-DOS, not UNIX, and MS-DOS and its derivatives make heavy use of the high-order bit. UNIX maintained 7-bit purity in most implementations, as mandated by its own portability goals, and UTF-8’s ultimate specifications were coded up on a New Jersey diner placemat by Ken Thompson, co-creator of UNIX, and Rob Pike, one of its earliest and most prolific contributors. UTF-8 effectively solved the problem for most UNIX systems, which were pure 7-bit systems from the beginning. But why should UTF-8’s massive shortcomings have been foisted upon everyone else, as if UNIX–like many of its proponents–was some playground bully, shoving its supposed superiority down everyone else’s throats?
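
A concrete illustration of the collision (standard Latin-1/Unicode facts, not taken from the original systems discussed here):

ISO-8859-1 / Windows-1252:  é = 0xE9         (one byte, high-order bit set)
UTF-8:                      é = 0xC3 0xA9    (two bytes; a lone 0xE9 is an invalid UTF-8 sequence)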

It should not. The UNIX philosophy, like functional programming, microkernels, role-based access control, and RISC, has its merits, but it is not the only kid on the block, and solutions like UTF-8 that just happen to work well in UNIX shouldn’t be forced upon environments where they only break things. Better to make a clean break to a sane, fixed-width encoding like UTF-32, perhaps providing runtimes for both ASCII (including its 8-bit extensions) and the new encoding to allow software to be ported to use it piecemeal. At least with something like UTF-32, data from other encodings can be programmatically converted to it, whereas with UTF-8 with its two-bit 8th-bit meddling, there’s no way of knowing whether you’re dealing with invalid code points, kludgey shift characters, or some ASCII extension that was used for a meaningful purpose.

Now I’ve seen everything…

The president and CEO of OSEHRA recently posted the following announcement:

The Department of Veterans Affairs yesterday announced a decision to select a new electronic health record system based on the same platform that DoD purchased a couple of years ago. The announcement recognizes many unique needs of VA that differ from the DoD. VA would thus not be implementing an identical EHR. VA is trying to create a future health IT ecosystem that takes advantage of previous investments with this new platform, as well connections with many other IT systems in the private sector. The industry trend toward open platforms, open APIs, and open source software is expected to remain integral to VA’s strategy to build a new and interoperable ecosystem. OSEHRA provides a valuable link joining VA to the broad health IT community. This activity will remain critical to the success of VA’s transition strategy by eliminating future gaps and conflicts in an ever more complex ecosystem. Transition to a new EHR system will require years of efforts and in-depth expertise in VistA that currently resides mostly in the OSEHRA community. Innovations in health IT such as cloud-based implementations, analytics, clinical decision support systems, community-based care, and connected health will come from domains external to traditional EHR systems. Recent VA investments in eHMP and DHP are examples of open source innovations external to traditional EHRs, and they are expected to evolve as new platforms within the VA’s emerging health IT ecosystem.

Seong K. Mun, PhD
President and CEO

I suppose if we have our heads in such a place where the sun doesn’t reach, we can pretend that the VA’s adoption of a proprietary EHR is somehow a victory for open source.

I suppose, however, that I shouldn’t be surprised, considering that OSEHRA is just a dog-and-pony show to allow the government to pretend that it supports open source while doing exactly the opposite.

It helps little that large and critical components of eHMP–which is admittedly an extremely impressive project–aren’t even published in OSEHRA’s code-in-flight releases.

In the sand hast thou buried thine own heads, OSEHRA. An ally you are not.

Hasta la VistA, Baby!


UPDATE

This article implies that VA dropping VistA would be good for VistA. This makes the assumption that the extra-governmental VistA community and private vendors (like MedSphere and DSS) would step in to fill the void left by VA’s departure from VistA development. If, instead, this community continues to expect salvation from within the VA bureaucracy, VistA will die.

Also, please remember that I do not in any way fault individual VA developers for the bumbling mismanagement of the product.

It brings me no joy to express the grim reality, but I believe that at least someone needs to speak the difficult truth: politicians have never been friendly to VistA, government cannot effectively manage software projects, and the only bright path forward for VistA is to get it out of the hands of corrupt government cronies like Shulkin.


I’m not going to wring my hands today.

Instead, I’d like to extend my sincerest good wishes to Secretary Shulkin and his team as they embark upon what is sure to be a long and difficult transition to the Cerner EHR. I really do hope it works out for them.

I’m also hardly able to contain my excitement for what this could mean for the future of VistA. Provided the VA stays the course with this plan, its future has never been brighter.

The VA has been trying to get out of software development for years, and has had VistA limping along on life support the whole time. Outside, private-sector vendors have been understandably hesitant to make major changes to the VistA codebase, because they haven’t wanted to break compatibility with the VA’s patch stream. But now, there’s a chance that the patch stream will dry up, along with the stream of bad code, infected with the virus of Cache ObjectScript, and the VA’s marked indifference towards fixing structural problems with core modules like Kernel and FileMan. The VA always hated VistA, and they were atrociously incompetent custodians of it, from the moment it emerged from the rather offensively-named “underground railroad”. They suck at software development, so they should get out of that business and let the open source community take the reins.

This is not to say that there weren’t or aren’t good programmers at the VA: far from it, but VA’s bumbling, incompetent, top-heavy management bureaucracy forever hobbled their best programmers’ best intentions. And let’s be real: had Secretary Shulkin announced that VA was keeping VistA, it would be status quo, business-as-usual. VistA would still be VA’s redheaded stepchild, and the bitrot already plaguing it would get even worse. There was never the tiniest chance that the VA would wake up and start managing VistA well, much less innovating with it. And even if this Cerner migration fails (which is not at all unlikely), there will never be such a chance. Its successes stem entirely from its origins as an unauthorized, underground skunkworks project by those great VistA pioneers who courageously thumbed their noses at bureaucratic stupidity. VistA only ever succeeded in spite of the VA; not because of it.

But, what about patient care? Won’t it get worse as a result of dropping such a highly-rated EHR?

Worse than what? VA sucks at that too, and always has. Long waiting lists, poor quality of care, bad outcomes, scheduling fraud, skyrocketing veteran suicides: none of this is related in any way to VA’s technology, for better or worse. It’s just that pouring money into IT changes is a quick way for a bureaucrat, whose maximal career span is far too short to effect any real change, to appear to be doing something. When IT projects fail, they can dump it in their successors’ laps, or blame the contractor, and go upon their merry way visiting fraud, waste, and abuse upon the taxpayer, while those who committed to making the ultimate sacrifice in service of king and country are left wondering why it still takes them months just to be seen.

So I sincerely do wish the VA the best of luck in its witless endeavor, and hope that they succeed, by whatever comical measure of success their bumbling allows. Hopefully, this will open the door for the open-source community to take the awesomeness that is VistA and bring it forward into a brighter and happier future.

Feel free to join me. Virtual popcorn and soda is free.

The Problem With Package Managers

As Linux moves farther away from its UNIX roots, and more towards being yet another appliance for the drooling masses (the same drooling masses who just five years ago couldn’t grok the difference between a CD-ROM tray and a cup holder), our once great proliferation of usable choices has dwindled due to a tendency on the part of developers to target only Debian- or Red Hat-based distributions, with a strong bias towards Ubuntu on the Debian side, while few of the more generous developers will also target SuSE, and even fewer will distribute software as a distribution-agnostic tarball. This situation leaves users of other distributions in a precarious position, especially in the case of those of us who–like the author of this article–believe that systemd is a baroque, labyrinthine monument to bogosity (how Lennart Poettering manages to get hired by any reputable software development firm is an atrocity that boggles the mind–his other big “hit” is a three-coil, peanut-laden steamer of a solution-looking-for-a-problem called PulseAudio), and would seek one of the increasingly rare sysvinit based distributions to get away from it.

This is a problem mostly due to package managers. If you’re on a Debian-based system, you get apt. Red Hat, yum. SuSE, zypper. These utilities should need no introduction, and are often praised by Linux users: a single command will install a package and all of its required shared libraries and dependencies, and another command will upgrade packages to the latest and greatest versions, all from a centralized, cloud-based repository or list of repositories. They do provide some convenience, but at a cost: the days of reliably being able to find a simple tarball that will work with the incantation of ./configure; make; make install seem to be numbered. This was a nice, cross-platform solution, and had the added benefit of producing binaries that were well-optimized for your particular machine.

One bright light in all this darkness is the pkgsrc tool in NetBSD: you check out a full source tree from a CVS repository, which creates a directory structure of categories (editors, databases, utilities, etc.), each containing subdirectories representing individual packages. All you need to do is descend into the desired subdirectory and type an appropriate make incantation to download the package and its dependencies, build them, and install them to your system. Updates are similar: fetch the latest changes from the CVS repo, and repeat the process.
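
For example, installing an editor might look something like this (assuming the pkgsrc tree is checked out to /usr/pkgsrc; on non-NetBSD hosts the bootstrapped bmake stands in for the native make):

cd /usr/pkgsrc/editors/vim
make install clean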

However, not even pkgsrc has solved the other big problem with most package managers, and that is the politics of getting new packages into the repositories. The Node.js package manager, npm, is the only one that does this correctly (in the FOSS sense) in any way: you go to the npmjs.org website, create an account, choose a package name (and hope it hasn’t already been taken by another developer), and you are in charge of that little corner of the npm world. You manage your dependencies, your release schedule, your version scheme, the whole nine yards. With Linux distributions, it seems that only a blood sacrifice to the gatekeepers will allow you to contribute your own packages, and even when you get past their arcane requirements, it is still a mass of red tape just to publish patches and updated versions of your software. Node.js, for instance, has not been updated in the mainline distribution repositories since v0.10, which is by all measures an antique.
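
By contrast, publishing and maintaining an npm package is entirely self-service, along these lines:

npm adduser          # one-time account setup / login
npm init             # describe the package in package.json
npm publish          # claim the name and publish the first version
npm version patch    # later: bump the version...
npm publish          # ...and publish the update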

In order to meet my standards, there are three solutions, which should be employed together:

  • Publicly and brutally shame developers who release only deb and rpm packages but no ./configure; make; make install tarball until they are so insecure that they cry into their chocolate milk and do the right thing (or strengthen the developer gene pool by quitting altogether and opting for a job wiping viruses for drooling PC users with The Geek Squad)
  • Push the Linux distributions to abandon the brain-dead cathedral approach to repo management and opt for a more bazaar-like egalitarian approach like npm
  • Make countless, humiliating memes of Lennart Poettering in embarrassing and compromising contexts (this bit is more for the health of UNIX as a whole than for package managers, but it’s the duty of every good UNIX citizen)