Prognostication is a messy business. On January 1, we don't know exactly where things will be at the end of the year. As William Gibson purportedly said, "The future is already here. It's just not very evenly distributed." Which trends will finally push through and catch everyone's attention? Which technological innovations will spark excitement, enthusiasm, or outrage? Will policy or political changes cause trends that had been building to stall? Rather than resign ourselves to fate and let fortune carry us where it may, we have an opportunity to drive momentum and the changes we would like to see.

Many things on our horizon seem to me to be priorities. Two of these are security and privacy. These ideas are inextricably tied, and one shouldn't take precedence over the other. NISO has supported work on privacy and continues to encourage the library and technology communities to build solutions with privacy in mind from the outset, as this is key to creating a trusted information ecosystem. Meanwhile, information systems are not nearly as secure as they could or should be. The IP-based authentication that most publishers and libraries use to provide access control is both insecure and faulty in terms of the user experience it provides to patrons. It is long past time for a change, and there is finally some momentum behind developing new authentication methods. NISO and the communities with whom we work have begun positioning ourselves to advance an alternative solution through the RA21: Resource Access in the 21st Century initiative, first launched by the STM Association but now opening up to broader participation. This project is only a first step in what will be a long road.
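To make the critique concrete, here is a minimal sketch of how IP-range access control typically works. The institution name and address ranges are hypothetical, and no particular publisher's implementation is implied; the point is only to show why the model is fragile.

```python
# Minimal sketch of IP-range access control, the model RA21 aims to replace.
# Institution name and ranges are illustrative placeholders.
from ipaddress import ip_address, ip_network

# Publishers maintain registries like this for thousands of subscribers.
LICENSED_RANGES = {
    "Example University": [
        ip_network("192.0.2.0/24"),
        ip_network("198.51.100.0/24"),
    ],
}

def has_access(client_ip: str) -> bool:
    """Grant access if the request originates inside a licensed range."""
    addr = ip_address(client_ip)
    return any(
        addr in net
        for ranges in LICENSED_RANGES.values()
        for net in ranges
    )

# The weaknesses alluded to above:
# - anyone on the campus network is "authenticated," legitimate or not
# - a patron working from home is denied unless routed through a proxy/VPN
# - the ranges must be registered and kept current by hand
print(has_access("192.0.2.17"))   # True: on campus
print(has_access("203.0.113.5"))  # False: the same patron, off campus
```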

Improving behind-the-scenes interactions between machines is a more challenging priority for NISO; we aren't quite at the point of undertaking related activity, but we're aware that it is an area worthy of concentrated study. Some time ago, I wrote about archiving the "live web," but since then, ever more of our interactions with content have become customized to individuals' contexts, behaviors, and changes in external information. This makes it difficult to replicate one's online findings, and it is likely that others' views or experiences of the same material differ from ours.

The simplest example of this complexity is telling someone to search for something on the web after doing the search yourself, and finding that the two sets of top results differ. The complexities go far beyond that, however, especially when replicating scientific discovery and when considering preservation of online material. What does it mean to preserve content when information is constantly being added, deleted, and amended? There isn't a hard copy to which someone can refer, and even the electronic copy you're viewing is ephemeral and subject to change. This is doubly troubling in an era when it is increasingly common for people to deny they said or did something, despite concrete recorded evidence to the contrary. In a "post-factual" world, the maintenance of a version of record becomes even more valuable, and the skills our community possesses in that regard should only grow in importance.
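At its most basic, maintaining a version of record means being able to prove that the copy in hand is the copy that was captured. The sketch below illustrates that principle with a simple fixity record; it is an illustration only, assumes nothing beyond the Python standard library, and real preservation systems do considerably more.

```python
# A minimal sketch of the "version of record" idea: record a cryptographic
# fixity digest and a capture time for a snapshot, so that any later
# alteration of the copy is detectable. The snapshot content is hypothetical.
import hashlib
from datetime import datetime, timezone

def fixity_record(content: bytes) -> dict:
    """Return a checksum-plus-timestamp record for a captured document."""
    return {
        "sha256": hashlib.sha256(content).hexdigest(),
        "captured_at": datetime.now(timezone.utc).isoformat(),
    }

snapshot = b"<html>...the page as it appeared today...</html>"
record = fixity_record(snapshot)

# Later, anyone holding the record can verify the copy they are shown
# really is the version of record:
assert hashlib.sha256(snapshot).hexdigest() == record["sha256"]
```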

A third trend is that content providers are increasingly diversifying their portfolios to include much more than content, with most of the major publishers and many smaller providers offering services as well as products. Work by Jeroen Bosman and Bianca Kramer at the Utrecht University Library to catalog and classify these services has provided some useful insights. The proliferation of these new business lines leads inevitably to a greater need for underlying standards to interchange information between services and tools, or between tools, should a user choose to migrate from one product to another. Integration of the various tools could simplify the work of researchers and students and speed the sharing of results. If they lack a foundation of standards and interoperability, the new offerings could fragment communities into networks that create more barriers than they remove. Traditionally, content providers have built materials around interoperable standards so as to ease distribution and let users move from item to item. Software providers, however, are often driven by competitive pressures to lock users into proprietary solutions. One hopes that as content providers move further toward the provision of services alongside content, they will continue to support interoperability and the standards that make it possible.

In the words of Yoda, "Always in motion is the future." NISO, too, is in motion, with much work ahead of us, but prepared for the future. We're looking forward to another active and successful year.

Todd Carpenter

Executive Director

NISO Reports

New and Proposed Specs and Standards

Draft Charters of W3C Publishing Business Group, EPUB 3 Community Group Available

IDPF and W3C plan to combine later this month and, as part of that move, have released for comment draft charters for the anticipated new W3C Publishing Business Group and the same organization's EPUB 3 Community Group. Once the comment period closes, the revised versions will be adopted as "first charters," which may be amended later.

» Go to story

COUNTER Code of Practice Release 5 Consultation Webinars

Release 5 of Project COUNTER's Code of Practice will be available this month and will feature, the project says, fewer but more flexible usage reports and a reduced number of metric types. COUNTER is seeking user input ahead of the release and has scheduled three webinars to gather opinions and ideas.

» Go to story

DAISY Releases Tobi 2.6.1

DAISY has released an update of its open-source multimedia production tool, Tobi. As with previous versions of the tool, users can produce books with full text and audio that conform to both the DAISY 3 (ANSI/NISO Z39.86-2005 (R2012), Specifications for the Digital Talking Book) and mainstream EPUB 3 specifications. New features requested by users include a way to reverse the last cleanup operation, more robust recording, and implementation of the "aria-details" property for linking to an external Diagrammar image description document.

» Go to story

Media Stories

NISO Members' Favorite Science and Technology Articles of 2016

While it was not everyone's favorite year, 2016 still produced plenty of thought-provoking and informative science and technology journalism. Here are roundups of favorite 2016 articles from NISO members AIIM, Los Alamos National Laboratory, Johns Hopkins University, PLOS, and SSRN.

NIST Asks Public to Help Future-Proof Electronic Information

"The National Institute of Standards and Technology (NIST) is officially asking the public for help heading off a looming threat to information security: quantum computers, which could potentially break the encryption codes used to protect privacy in digital systems. NIST is requesting methods and strategies from the world's cryptographers, with the deadline less than a year away."

» Go to story

The Health Data Conundrum

Why is our health data so tempting to hackers? Because it's the most valuable data they can steal, say Haun and Topol, who explain that the information can be used to order medical equipment and drugs for resale and to scam insurance companies. "We cannot leave it to the health record software companies...to bring about the needed changes," say the authors, who suggest changes to the custody of health information. For more detail on this topic, see videos from the 2016 Privacy Implications of Research Data: A NISO Symposium.

» Go to story

Intake of Digital Content: Survey Results from the Field

"The authors developed and administered a survey to collect information on how cultural heritage institutions are currently managing the incoming flood of digital materials. Questions asked focus on the selection of tools, workflows, policies, and recommendations from identification and selection of content through processing and providing access."

» Go to story

Big (and Open) Data for Scholarship of All Sizes: A New Release of the HathiTrust Research Center Extracted Features Dataset

HathiTrust has announced the release of the HathiTrust Research Center (HTRC) Extracted Features (EF) Dataset, Version 1.0, which provides researchers with open access to data from the full text of the HathiTrust Digital Library. The data, says the organization, is extracted from 13.7 million volumes and represents more than 5 billion pages. This is not entirely a pioneering move, but it is still a significant one: the organization previously released a dataset derived from the public domain works in its collection, but this release is much larger. (A sketch of reading the per-volume files follows the link below.)

» Go to story
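For the curious, here is a minimal sketch of reading one volume's file. The dataset is distributed as one bzip2-compressed JSON file per volume; the layout assumed here (features, then pages, then body, then tokenPosCount) follows the published EF Version 1.0 schema, but consult HTRC's documentation before relying on it, and note that the filename is hypothetical. HTRC also publishes a Python library, the HTRC Feature Reader, that wraps this format.

```python
# Sketch of reading one HTRC Extracted Features volume file.
# Assumes the EF 1.0 per-volume JSON layout; verify against HTRC docs.
import bz2
import json
from collections import Counter

def volume_token_counts(path: str) -> Counter:
    """Sum page-level body token counts for a single EF volume file."""
    with bz2.open(path, "rt", encoding="utf-8") as f:
        volume = json.load(f)
    counts: Counter = Counter()
    for page in volume["features"]["pages"]:
        # tokenPosCount maps each token to its part-of-speech counts.
        for token, pos_counts in page["body"]["tokenPosCount"].items():
            counts[token] += sum(pos_counts.values())
    return counts

# Hypothetical filename; real files are named by HathiTrust volume ID.
counts = volume_token_counts("mdp.39015012345678.json.bz2")
print(counts.most_common(10))
```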