Tech Trends: Small Steps but Significant Change

As we unofficially enter the next decade, it is worth reflecting on what the twenty-teens were like and considering what information distribution will look like in the twenty-twenties. We often fail to notice significant change that unfolds over a long span of time because, as we go about our daily lives, each change appears merely incremental.

Mobile

While mobile computing got its start in the 2000s with the arrival of smartphones and the Kindle, it became ubiquitous in the 2010s. Beginning with the release of the first iPad in the spring of 2010, mobile computing advanced significantly. Other devices, such as the Kindle Fire, the Samsung Galaxy, and later the Microsoft Surface, expanded the marketplace to the point where these devices are commonplace. An interesting transition for these devices is how their primary application has shifted.

Because of the rapid adoption of the Amazon Kindle in the late 2000s, and the coincident growth in the availability of ebooks, there was a presumption that the iPad and similar devices would be used to consume text. In fact, one of the first reviews of the iPad, after a section on opening the box and how to use the device, turned to the iPad’s functionality for reading text as its second topic. The early presumption, based on the environment at the time, was that people would consume text on the device. This was quickly turned on its head by the advent and growth of streaming video content. Rapid growth was driven by cloud storage (see below), the power of the devices, increased connectivity speed, and a move to subscription content delivery (see also below). Rather than read a book on a device with access to the world’s libraries and booksellers, people realized it was far easier to lean into a mobile device to watch a favorite movie, tune into a favorite YouTube celebrity, or play a game. Just look at a recent iPad review; its use for reading receives no mention whatsoever. Of course, there is still a moderately sized market for e-ink reading devices, selling in the range of 5-6 million devices per year toward the end of the decade, but it is declining. People still do read on mobile devices, apparently a lot, but there is a growing focus on interactivity, particularly with audiobooks.

====

Cloud

Businesses learned quickly that outsourcing infrastructure, such as computing, storage, and software management, made a lot of sense for information technology. The term Software-as-a-Service first appeared in a US patent as early as 1985. In the 2010s, the Cloud as a service platform became a term of art in business following its definition in a 2011 NIST report. Today, it has moved into our daily lives. We think nothing of interacting with internet-based storage or computation, if we are aware of it at all. For example, an iPhone will “manage” your photo storage automatically, maintaining only low-resolution versions of files on your phone and downloading a full-resolution version if you require it. When you interact with a voice assistant, such as Siri or Alexa, the processing of the voice command happens on the network. Even as the cost of storage has decreased, new devices still ship with comparatively limited storage in most models, because of the availability of cloud storage. For more robust research services, it makes much more sense to log onto a cloud service, process research data on the web, and then (in most cases) move the results around the network. Cloud services have transformed information technology in business, providing Fortune 50-level service to start-ups as well as established businesses. They have also transformed how we collectively consume media. Instead of buying physical copies of content, an increasing majority of people subscribe to streaming services in which all the world’s music or movies are available online, albeit for a price, with limited rights, and increasingly only in the right content silo.
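
To make the storage pattern concrete, here is a minimal Python sketch of the “keep a small copy locally, fetch the full file on demand” behavior described above. It is an illustration only: fetch_from_cloud is a hypothetical stand-in for a real storage provider’s API, and the cache paths are invented; the on-demand caching logic is the point.

    import os

    CACHE_DIR = "local_cache"  # hypothetical local cache location

    def fetch_from_cloud(key: str) -> bytes:
        """Placeholder for a network call to full-resolution cloud storage."""
        raise NotImplementedError("wire this to a real storage provider")

    def get_photo(key: str, full_quality: bool = False) -> bytes:
        """Return a photo, preferring the cheap local thumbnail unless
        full quality is explicitly requested."""
        thumb_path = os.path.join(CACHE_DIR, key + ".thumb.jpg")
        full_path = os.path.join(CACHE_DIR, key + ".full.jpg")

        # The common case: a low-resolution copy is already on the device.
        if not full_quality and os.path.exists(thumb_path):
            with open(thumb_path, "rb") as f:
                return f.read()

        # Only touch the network when the full-resolution file is required.
        if not os.path.exists(full_path):
            data = fetch_from_cloud(key)
            os.makedirs(CACHE_DIR, exist_ok=True)
            with open(full_path, "wb") as f:
                f.write(data)

        with open(full_path, "rb") as f:
            return f.read()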

====

Metadata 

In the library, information, and publishing spaces, metadata is a common discussion item for panels and working groups, with passionate arguments regarding fields and tagging a regular feature. In the 2010s, metadata rose to prominence in the public lexicon. People became aware of metadata’s power: if not aware of its glorious nuances, then at least aware of its existence and potential. In 2013, Edward Snowden’s release of documents describing NSA data collection efforts drew attention to that potential, long understood by information professionals but not by the wider public. Metadata is the entrée into the digital world. Much like the shadow wall of Plato’s cave, metadata is the lens through which we perceive the digital world. Without metadata, we cannot navigate the digital landscape. We cannot search. We cannot engage, nor can we retrieve what we seek. And yet our travels through the digital world leave streams of metadata, breadcrumbs of our personal journeys. Such metadata give a collector or analyst, whether human or machine, deep insight into our hopes, dreams, fears, and desires. The 2010s was the era when metadata became a digital currency for those who knew how to use it. In the coming decade, metadata is the arena where many of the battles over privacy, security, control, and metrics will take place.
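
The analytic power of metadata is easy to demonstrate. The toy Python sketch below uses a few invented call records, with no message content at all, and still surfaces a pattern that a human or machine analyst would notice immediately.

    from collections import Counter

    # Invented metadata records: who was contacted and when, but no content.
    call_metadata = [
        {"to": "clinic",   "hour": 9,  "day": "Mon"},
        {"to": "clinic",   "hour": 9,  "day": "Wed"},
        {"to": "pharmacy", "hour": 17, "day": "Wed"},
        {"to": "clinic",   "hour": 9,  "day": "Fri"},
    ]

    contacts = Counter(record["to"] for record in call_metadata)
    hours = Counter(record["hour"] for record in call_metadata)

    # Even four records suggest a recurring relationship and a routine.
    print(contacts.most_common(1))  # [('clinic', 3)]
    print(hours.most_common(1))     # [(9, 3)]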

====

The Development of Our Robot Overlords

As the decade was closing, it seemed appropriate to celebrate Blade Runner Day by recognizing how far artificial intelligence (AI) has come since the publication of Philip K. Dick’s 1968 novel, Do Androids Dream of Electric Sheep? While we are very far away from a generalized AI that is indistinguishable from human intelligence, the past decade has seen rapid increases in the power and availability of tools that incorporate components of machine learning. This includes a suite of technologies ranging from machine perception, natural language processing, and voice interaction to memory (i.e., massive dataset gathering, storage, training, and recall), robotics, and learning algorithms. With the exponential growth in storage and processing power, combined with the related decreases in cost, there has been an explosion of the tools necessary to apply AI-like techniques. It has become second nature to speak to our devices; once a novelty, voice control became more reliable and more ubiquitous over the past decade. We rely on machines to find our way around the world via maps and directions, we expect algorithms to guide our information discovery, and we let them manage many of the details of our lives. Behind the scenes, machines increasingly control complex systems, such as financial markets, energy systems, and telecommunications. There are tentative steps toward autonomous driving, transportation management, healthcare management and delivery, security, and surveillance. As these technologies have been extended, various challenges and unintended consequences have become obvious, including security, privacy, ethical application, and algorithmic bias.
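
For readers unfamiliar with the mechanics, a “learning algorithm” at its core simply adjusts parameters to reduce error on examples. The Python sketch below, using invented data points, fits y = w * x by gradient descent; production systems differ enormously in scale, but not in this basic idea.

    # Invented (x, y) pairs that roughly follow y = 2x.
    data = [(1.0, 2.1), (2.0, 3.9), (3.0, 6.2)]

    w = 0.0    # initial guess for the parameter
    lr = 0.01  # learning rate: how far to step on each update

    for _ in range(1000):
        # Gradient of the mean squared error with respect to w.
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad  # step downhill, reducing the error

    print(round(w, 2))  # close to 2.0: the parameter was "learned" from data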

====

Security

The decade began with a bang from a community best known for operating in the shadows. The revelations released by Chelsea (formerly Bradley) Manning in 2010 brought to light the conduct of the US military in Iraq during the second Persian Gulf War (2003-2011). It was also one of the first major challenges to the notion that digital information could be held securely, a notion put firmly to bed by subsequent releases of information from Edward Snowden a few years later. Such revelations continued throughout the decade: the Panama Papers, the hack of the Democratic National Committee’s servers, the Equifax breach, and on and on and on. The academic community has not been spared. Academic institutions are the third most frequently targeted sector, at a rate higher than that of retail. In the library and publishing world specifically, this took the shape of Sci-Hub, which, despite the support of some proponents of open content, has credibly been tied to foreign-government hacking networks. As more and more digital information is created and housed, and as more value is assigned to captured traces of our digital interactions, the rate at which these data are hacked and stolen will only increase. For too long, we have ignored digital security at our peril, and we remain slow to make investments in securing our digital resources commensurate with the threats. As computational power expands exponentially with the advent of quantum computing, it is likely that there will not be a period of secrets for much longer, if ever there was one before.

====

Each of these topics positions us for the social reactions to these technologies. Technology never operates in a vacuum. It exists within a social structure that limits its application and creates the environment in which it succeeds, grows, or fails. These are the topics I will delve into next month in Part Two of this list.

====

Looking Forward to Part Two:

Privacy – hacks, advertising, monitoring, the NSA, facial recognition: all revolve around metadata, privacy, and the ability to track people online

Regulation – Net Neutrality, SOPA/PIPA, IP Protection, Anti-trust, privacy, GDPR

Openness – open source, open access, open ledgers

Metrics – bibliometrics, altmetrics, data analysis, data science

Economics

Data – research data publishing, data storage, data interactivity