Maintaining Infrastructure for Industry Health

Letter From the Executive Director, May 2020

There is an adage that no one cares about infrastructure until it fails. We look around and see a tremendous amount of physical infrastructure, from roads to power lines, from storm drains to the plumbing in our houses. These elements of our built environment are put in place long before the finishing touches that we all appreciate. Long before the houses are built in a new neighborhood, the water pipes and natural gas lines are laid underground. The plumbing and wiring are installed before the drywall goes up. We presume that all of this work is done well, and for the first decade, or even the first several decades, the infrastructure is usually ignored. Slowly over time, though, these systems start to deteriorate and break down. The power demands of modern life overtax the electrical system of a house built in the 1930s. Galvanized steel pipes begin to corrode, then clog or leak. Anyone whose house is more than 30 years old recognizes some of these issues, and the older the house, the more acute they become.

Even more invisible to most people is our digital infrastructure. And yet the causes, implications, and eventual problems are no less severe, and no less noticeable, when it fails. Over the past month, the unemployment rate in the U.S. has spiked, with more than 30 million Americans filing for unemployment benefits, and it is poised to go even higher. This is a horrific situation for the country, and untold misery underlies this crisis. As this flood of people has sought support through government benefits, they have hit an unexpected barrier: the information systems that are supposed to process those benefits. Many of the systems that manage unemployment benefits—indeed, many government IT systems—were put into place decades ago and haven’t been upgraded. It is fair to ask whether even the most modern system could have withstood the torrent of activity brought on by the current pandemic. But the fact that systems more than 40 years old failed to withstand the spike should surprise no one.

Anyone who works with legacy systems will recognize this situation. The underlying systems function well enough. They’re not ideal; there are problems, but decades’ worth of workarounds have been developed and implemented. They cause innumerable headaches for those who use them regularly, but outside the core support team, few ever have to deal with the problems at all.

Unfortunately, the workforce that has deep knowledge of how those systems function is rapidly retiring, taking this knowledge with it. There have been several news stories recently about the lack of COBOL programmers, because why would a newly minted programmer care at all about learning a 50+-year-old programming language? She would have grown up learning to code in Python or Ruby or JavaScript. Not surprisingly, these stories have been circulating for years. Many of them frame the problem as “How do we get more people coding in COBOL?” The real question should be why we aren’t collectively investing in our government’s digital infrastructure to bring it up to modern standards, so that it isn’t still reliant on outdated hardware and the programming language used with it.

Before we laughingly dismiss the government as incapable of getting anything right, we should look at our own IT infrastructure and assess its status. In publishing, many of the production and fulfillment systems that undergird the creation and distribution process are similarly long in the tooth. For example, the original ONIX for Books standard, an XML-based messaging format for communicating information about books, was introduced 20 years ago, with version 2.0 following shortly thereafter in 2001. In 2009, EDItEUR introduced ONIX 3, a significant upgrade. EDItEUR sunsetted support for version 2.1 in 2014, yet it remains in wide use, and more than a decade after ONIX 3’s release, implementation continues to lag, particularly in the U.S. Why? Because the infrastructure necessary to support it isn’t in place? No. Version 2.1 is still functioning well enough for many to keep using it. Perhaps it isn’t functioning well, or isn’t able to meet current demands, but it hasn’t broken completely, and of all the areas where an organization could spend its money, maintaining the old system is far cheaper than replacing it and upgrading the infrastructure to a more modern standard. And this is just one example; there are many more.
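To make that concrete, here is a minimal sketch of what an ONIX-style product record looks like and how little code it takes to read one. The record below is heavily simplified and purely illustrative: the tag names follow the ONIX 3.0 reference names, but the ISBN is a placeholder, and a real ONIX message wraps many such products in an ONIXMessage envelope carrying far more detail.

    # Illustrative only: a pared-down, ONIX 3-style product record.
    # Real messages are namespaced, validated, and far richer; this
    # sketch just shows the shape of the XML and how it is read.
    import xml.etree.ElementTree as ET

    ONIX_SNIPPET = """
    <Product>
      <RecordReference>com.example.9780000000000</RecordReference>
      <ProductIdentifier>
        <ProductIDType>15</ProductIDType>  <!-- code 15 = ISBN-13 -->
        <IDValue>9780000000000</IDValue>
      </ProductIdentifier>
      <DescriptiveDetail>
        <TitleDetail>
          <TitleType>01</TitleType>
          <TitleElement>
            <TitleElementLevel>01</TitleElementLevel>
            <TitleText>An Example Title</TitleText>
          </TitleElement>
        </TitleDetail>
      </DescriptiveDetail>
    </Product>
    """

    product = ET.fromstring(ONIX_SNIPPET)
    isbn = product.findtext("./ProductIdentifier/IDValue")
    title = product.findtext(".//TitleText")
    print(isbn, "-", title)   # 9780000000000 - An Example Title

The point is not the parsing; it is that a 2001-era format can still be read trivially, which is precisely why so many organizations feel little urgency about moving off of it.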

In the library space, the same situation exists, and it is perhaps even more troubling. The MARC record standard was first developed in the 1960s. It’s an extremely efficient method of sharing bibliographic information, because it had to be. When the MARC record format was first created, the benchmark computers available, the IBM System/370 series, could muster a maximum of only 512 KB of memory. Storage was expensive. Processing power was in short supply, and access was limited. You couldn’t spare memory for robust descriptions. This is why, among other things, standards for elements such as abbreviations were so important. In 1999, a new generation of MARC was developed and, in combination with MARCXML, moved bibliographic records out of the digital stone age. Even so, the basic underlying design of library bibliographic records is still driven by technology that predates the NASA moon landing.
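To see just how lean that design is, here is a small sketch that builds a record the way the MARC 21 binary layout specifies: a 24-byte leader, a directory of fixed 12-byte entries, then the field data itself, with one-byte delimiters separating everything. It is a toy, not a production encoder, and it omits the control fields (001, 008, and so on) a real record would carry.

    # A toy MARC 21 builder, illustrative only. Every record is a byte
    # stream: a 24-byte leader, a directory of 12-byte entries
    # (3-byte tag, 4-byte field length, 5-byte field offset), then the
    # field data. The design assumes memory and storage are scarce.
    FT, RT, SF = b"\x1e", b"\x1d", b"\x1f"  # field/record/subfield delimiters

    def build_marc(fields):
        """fields: (tag, data) pairs; data already carries its
        indicators and subfield delimiters."""
        directory, body, offset = b"", b"", 0
        for tag, data in fields:
            payload = data + FT
            directory += f"{tag}{len(payload):04d}{offset:05d}".encode()
            body += payload
            offset += len(payload)
        base = 24 + len(directory) + 1   # leader + directory + terminator
        total = base + len(body) + 1     # + field data + record terminator
        leader = f"{total:05d}nam a22{base:05d} a 4500".encode()
        return leader + directory + FT + body + RT

    record = build_marc([("245", b"00" + SF + b"aAn Example Title.")])
    print(len(record), "bytes in total")  # 60 bytes in total
    print(record[:24].decode())           # 00060nam a2200037 a 4500

A full record with a title, an author, and a few subject headings still fits in a few hundred bytes, which is exactly what a machine with 512 KB of memory needed.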

There have been ongoing developments in both publishing and library systems. ONIX and MARC continue to evolve. The library community began work on a next generation of bibliographic exchange based on linked data. At the time, I predicted that, in order to be adopted, the new model would need to be orders of magnitude better at one of the following things: saving people’s time; allowing for greater efficiency of workflow; supporting activities that were impossible beforehand; or being so cheap that moving to new systems would be nearly cost-free, thereby freeing up the management time invested in the old systems. Eight years into that effort, systems deployments remain tentative. Of course, new systems implementations, particularly those built on new baseline standards, are not simple. MARC-based systems didn’t suddenly appear in every library in 1972 after the standard was released.
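For readers who haven’t worked with it, “linked data” here means describing resources as a graph of globally identified things and typed relationships, rather than as fields packed into a single record. The sketch below is a toy illustration of that shape; it uses the third-party rdflib package and generic Dublin Core terms rather than the library community’s BIBFRAME vocabulary, so the URIs and property choices are assumptions made for illustration, not any standard’s prescribed form.

    # A toy linked-data description of a book (illustrative only).
    # Requires the third-party rdflib package (rdflib >= 6, where
    # serialize() returns a string); the URIs below are hypothetical.
    from rdflib import Graph, Literal, URIRef
    from rdflib.namespace import DCTERMS, RDF

    g = Graph()
    book = URIRef("http://example.org/book/9780000000000")
    author = URIRef("http://example.org/person/jane-doe")

    g.add((book, DCTERMS.title, Literal("An Example Title")))
    g.add((book, DCTERMS.creator, author))
    g.add((author, RDF.type, URIRef("http://xmlns.com/foaf/0.1/Person")))

    print(g.serialize(format="turtle"))

Because the author is an identifier rather than a text string, any other system can say more about her without touching this record; that is the promise the new model still has to prove is worth the cost of migration.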

I don’t mean to call out these issues to say we are further behind or ahead of any other community or systems implementers. Rather, it is important that we reflect on the fact that we have collectively ignored the costly task of maintaining and improving our infrastructure. It’s difficult work that requires the attention of those with the least spare time on their hands: the IT specialists at our organizations. But this work—and standards development is a critical element of it—is vital to ensuring that in those times when we need it most, our infrastructure doesn’t fail us. Unfortunately, many state governments’ systems are failing those who need them the most, at a time when they can least afford it.

With best wishes for your health and safety,

Todd Carpenter
Executive Director