Looking at Trends – Ten Years Back, Five Years Forward

In 2010, Elon University and the Pew Internet Project jointly published a report on the then-current state of the Internet as an information environment and the ways in which Americans were adapting to it. In the news release announcing the report’s availability, a few concluding expectations were featured:

  • Despite published fears, Google was not going to make us stupid.
  • Over the next ten years, reading, writing, and the rendering of knowledge would be improved.
  • Information would continue to flow relatively freely online, although there would be flashpoints over control of the Internet.
  • Anonymous online activity would be challenged during the decade, but some believed that anonymity would still be possible.
  • Innovation would continue to catch us by surprise.

Looking at the 900+ survey responses now, it’s clear that some of the Internet gurus were overly optimistic. The ease of using Google to find answers wasn’t what made us stupid this past decade, but rather, some might say, unfiltered postings to various social networks. One might agree that the rendering of knowledge has improved; consider Google's engineering of knowledge graph cards from linked data. Online information in the United States (U.S.) flows relatively freely, even as some foreign governments consider implementing tighter controls over the Internet. The concern about challenging anonymity online is, if anything, somewhat understated as we enter 2020.

What were the tech trends that shaped us in the ten years between the release of that report in February of 2010 and the composition of this article in January of 2020?

  1. Adoption of Mobile Devices

Mobile devices proved to be a game-changer. Mobile phones and tablets (good, bad, and indifferent) gave new meaning to the phrase “ubiquitous computing.” The Pew Research Center first began tracking smartphone ownership in the U.S. in 2011, when just 35% of Americans owned one; by 2015, some were announcing the arrival of the post-PC era. By 2019, 81% of Americans owned smartphones, and 96% owned a mobile phone of some kind. For roughly one in five Americans in 2019, a smartphone was their sole means of going online. As that shift in personal computing occurred, the wave of adoption created subsequent shifts in technology, end-user behaviors, and expectations.

  2. Location-based Services

Mobile devices drove rising interest in location-based services. Initially introduced to consumers as a convenient means of identifying the nearest bank, hamburger joint, or Starbucks, these services quickly revealed a second use: a variety of public and private entities grasped the benefits of mining customer location data. As one 2017 press release put it, “Location-based services are used to locate persons, objects, track vehicle movements, navigation, logistics, and inventory management. Upsurge in penetration...boosted the adoption of social networking platforms, and provided new avenues for location-based marketing & advertising. Moreover, rise in demand for active check-in apps, and use of business intelligence in fraud management & secure authentication has fueled the growth of location-based services market.”

Such an ability to track user location, interests, and behaviors would fuel an ongoing debate over data collection and privacy (more on this later).
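
Underlying those early “nearest location” features is a simple great-circle distance computation. The sketch below is purely illustrative: the store names and coordinates are invented, and a production service would use a spatial index rather than a linear scan.

```python
# Toy version of the core "find the nearest store" computation behind
# early location-based services. All names and coordinates are made up.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two (lat, lon) points, in kilometers."""
    dlat, dlon = radians(lat2 - lat1), radians(lon2 - lon1)
    a = sin(dlat / 2) ** 2 + cos(radians(lat1)) * cos(radians(lat2)) * sin(dlon / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))  # 6371 km = mean Earth radius

stores = {
    "Downtown": (41.4993, -81.6944),
    "Uptown": (41.5110, -81.6050),
    "Airport": (41.4117, -81.8498),
}
user = (41.5045, -81.6900)  # hypothetical device location

nearest = min(stores, key=lambda name: haversine_km(*user, *stores[name]))
print(nearest)  # -> Downtown
```

Layer advertising, check-ins, or fraud scoring on top of that one primitive, and the press release’s list of applications follows naturally.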

  3. Adapted Mobile Behaviors (Second-screening, Photo Apps, etc.)

Location-based services proved useful to a population that carried mobile devices on a 24/7 basis. Mobile phones had featured cameras as early as 2002, yet when the first Apple iPhone was introduced in 2007, reviews criticized the quality of that particular feature in what was otherwise dubbed the Device of the Year. Subsequent generations of the iPhone improved picture quality, however, and because those cameras were constantly on our persons, photography with one’s phone took off.

In 2010, the Instagram app appeared in the Apple App Store; Instagram for Android followed in 2012. In that short space of time, the way in which users documented their lives shifted. In 2013, The Verge reported on the impact of smartphone cameras on the art of photography. That same year, “selfie” was named Oxford Dictionaries’ Word of the Year. (Within five years, selfie sticks were being banned at museums and theme parks.)

Portability was important, but even when the owners of those devices were not moving from place to place, their devices were in use. In late 2012, in anticipation of Super Bowl advertising, Digital Trends reported that “the second screen provides a way to optimize the TV experience, not a way to divide it. It gives networks a new way to deliver content, it gives advertisers another touch-point by which to contact consumers, it gives consumers an enhanced experience, and it’s probably here to stay.” As reported by Nielsen in 2018, 45% of U.S. adults were using those devices while watching television: to look up information, to text someone about the on-screen content, to shop, and to connect with others on social media.

We would not be separated from our devices. As VentureBeat reported in 2013, even the Federal Aviation Administration had to recognize the shift and begin modifying its rules for in-flight device usage. By 2018, one analyst group estimated the value of the global in-flight entertainment and Wi-Fi connectivity market at USD 5.1 billion.

  4. Uploads, Downloads, Streaming, and Moving From 3G to 5G

Mobile connectivity and infrastructure posed an ongoing challenge as users became more creative in the use of their devices. Users were uploading ever more photos to social networking platforms, and buffering became a source of frustration when watching YouTube videos. MIT Professor Tom Leighton was quoted in a 2012 New York Times article as saying that user expectations of speed were “getting shorter and shorter, and the mobile infrastructure is not built for that kind of speed.” As the decade progressed, waiting for delivery over a 3G, then a 4G, connection seemed interminable; milliseconds became the metric. Netflix may have started out as a delivery service for DVDs, but by 2013 it was providing streaming services to some 27 million subscribers, many of them habitual binge-watchers, further fueling impatience for rapid access, as noted by the Harvard Crimson.

Over the course of the past decade, providers have steadily pushed the existing mobile infrastructure from 3rd generation (3G) network technology toward 5th generation (5G) network technology on a set of harmonized spectrum bands, with 4G LTE serving as the intermediate step.

The chief benefits of 5G are greater overall connectivity and enhanced speed for the variety of services that must handle tremendous amounts of data. In December of 2019, telecommunications providers, notably Verizon, AT&T, and T-Mobile, began touting limited roll-outs of 5G in specific urban markets. In practical terms, the shift in speed means that an app that took approximately one minute to download over 3G may be downloaded in a single second over 5G.
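
The arithmetic behind that claim is easy to verify. The throughput figures below are illustrative assumptions rather than measurements; real-world 3G often delivered only single-digit Mbps, and early 5G deployments varied widely.

```python
# Back-of-the-envelope check of the "one minute on 3G, one second on 5G" claim.
APP_SIZE_MB = 75  # hypothetical app size

def download_seconds(size_mb: float, throughput_mbps: float) -> float:
    """Convert megabytes to megabits, then divide by the link speed."""
    return (size_mb * 8) / throughput_mbps

for label, mbps in [("3G (~10 Mbps)", 10), ("4G LTE (~50 Mbps)", 50), ("5G (~600 Mbps)", 600)]:
    print(f"{label}: {download_seconds(APP_SIZE_MB, mbps):.1f} s")
# 3G (~10 Mbps): 60.0 s
# 4G LTE (~50 Mbps): 12.0 s
# 5G (~600 Mbps): 1.0 s
```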

  5. The Value of the Cloud and the Business Model

Amazon introduced its Elastic Compute Cloud (EC2) in 2006, characterizing it to customers as a “virtual CPU” that would allow businesses to easily scale their services up or down as needed. Five years later, it introduced Cloud Drive to individual consumers: an initial 5GB of free storage, with an available upgrade to 20GB for $20.00. EC2 became a critical component of the growing suite of services known as Amazon Web Services (AWS). (See 2012 coverage from Wired Magazine.) By 2017, Amazon was using Netflix as a marketing case study for its use of AWS as the infrastructure for delivery of streaming video. Microsoft and Google were not far behind in luring users into cloud storage (OneDrive, Google Drive). By the close of the decade, Forbes was reporting that the public cloud-infrastructure market was worth more than $32 billion and that AWS owned roughly half of it.
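
In practice, “scale up or down as needed” meant that capacity became an API call. The sketch below uses the boto3 SDK for Python; the AMI ID and region are placeholders, and a real deployment would rely on autoscaling groups rather than raw instance calls.

```python
# Minimal sketch of EC2's elasticity: rent a machine, then give it back.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")  # region is an assumption

# Scale up: launch an extra worker when demand spikes.
resp = ec2.run_instances(
    ImageId="ami-0123456789abcdef0",  # hypothetical machine image
    InstanceType="t3.micro",
    MinCount=1,
    MaxCount=1,
)
instance_id = resp["Instances"][0]["InstanceId"]

# Scale down: terminate it when demand subsides, paying only for time used.
ec2.terminate_instances(InstanceIds=[instance_id])
```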

Services such as Google’s office suite (later branded G Suite) were enabled by the availability of the cloud. Enterprises were relieved of the expanding costs of maintaining hardware and software, and further relieved that vendors would be responsible for providing 24/7 user support. Worries over supporting outdated software would evaporate. From the vendor perspective, moving to a subscription-supported business model meant that all users could be running the same version of the software, which represented a higher degree of protection for the vendor’s investment in software development.

Additionally, cloud computing more readily satisfied the needs of the mobile computing audience. Cloud-based applications for that audience could be engineered without being bound to a particular operating system or to the memory limitations of a given device.

But even as early as 2013, there were hesitations regarding the embrace of the cloud and what it might lead to. The Chief Information Officer at Case Western Reserve noted, “We have learned some things about the cloud, especially in the last year and, if you want to be more precise, the last couple of months, which are giving a lot of us on our campuses, certainly our faculty, cause for concern. And it does not have to do with the reliability of the service. It has to do with the privacy of the service.”

  6. Amassing Big Data

All of the foregoing changes in technology made possible the collection and storage of a huge amount of data. A 2013 article in MIT Technology Review noted that the amount of data amassed through the collective use of technology had “reached 2.8 zettabytes in 2012, a number that’s as gigantic as it sounds, and will double again by 2015, according to the consultancy IDC. Of that, about three-quarters is generated by individuals as they create and move digital files. A typical American office worker produces 1.8 million megabytes of data each year. That is about 5,000 megabytes a day, including downloaded movies, Word files, e-mail, and the bits generated by computers as that information is moved along mobile networks or across the Internet.” An IDC infographic appearing in that same article noted the expectation that, by this year (that is, 2020), the volume of data amassed would have increased by 2000%.
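
The per-worker figures hold up to quick arithmetic, and reading the 2000% projection literally implies a twenty-one-fold increase over the 2012 total (the interpretation below is ours, not IDC’s).

```python
# Quick arithmetic check of the figures quoted above.
mb_per_year = 1_800_000            # 1.8 million megabytes per office worker, per year
print(round(mb_per_year / 365))    # 4932 -> "about 5,000 megabytes a day"

zb_2012 = 2.8                      # zettabytes amassed worldwide in 2012
print(zb_2012 * 2)                 # 5.6 -> "will double again by 2015"
print(zb_2012 * (1 + 2000 / 100))  # 58.8 -> a 2000% increase by 2020
```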

Over the course of the past decade, a series of experiments, scandals, and breaches has made clear just how much data gets hoovered up through our engagement with online information, services, and devices. While some percentage of that data is provided voluntarily and with our consent, a further (potentially much larger) share is collected without ever being made evident to the user. The scope of the data is vast, and its value to corporate entities, government agencies, and institutions lies in how much may be mined to reveal the personal habits, preferences, and activities of an individual.

  7. Leveraging Analytics

As large companies collected user data, they sought out tools for analyzing it. The New York Times reported on Target’s uncanny ability to identify major life changes in customers’ lives and adapt messaging to those customers. Google researchers revealed ongoing work with Dremel, a tool that allowed queries to be run with blinding speed across petabytes of data. Without much fanfare, in 2013, Walmart Labs (the technology arm of Walmart Global eCommerce) acquired a small predictive analytics company, Inkiru. Inkiru offered real-time predictive intelligence, big data analytics, and a customizable decision engine to inform and streamline business decisions, all of which would allow Walmart data scientists to engineer better customer experiences. By 2015, publishing VPs were addressing researchers at NIH about using text mining and other analytical tools on their data, and bloggers were explaining the top ten data mining algorithms in plain English. By 2018, the Association of College and Research Libraries (ACRL) noted that the collection of data in learning management systems, and the ethics of running analytics across such data, were a worrisome trend.
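
For a flavor of what those plain-English explainers covered, consider k-means clustering, a fixture of the top-ten lists, applied to customer segmentation. Everything below is invented for illustration, and scikit-learn is assumed to be available.

```python
# Illustrative k-means sketch: grouping hypothetical customers into
# behavioral segments by visit frequency and average basket size.
import numpy as np
from sklearn.cluster import KMeans

# Columns: (visits per month, average basket size in dollars) -- made-up data
customers = np.array([
    [2, 15], [3, 18], [2, 20],       # occasional shoppers, small baskets
    [12, 25], [14, 30], [13, 28],    # frequent shoppers, mid-size baskets
    [4, 120], [5, 140], [3, 135],    # rare but big-ticket shoppers
])

model = KMeans(n_clusters=3, n_init=10, random_state=0).fit(customers)
print(model.labels_)           # which segment each customer falls into
print(model.cluster_centers_)  # the "typical" member of each segment
```

Target-style prediction and Inkiru’s decision engine were far more elaborate, but the underlying move is the same: let patterns in behavioral data define the segments.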

  8. What is Surveillance? What is Private?

Because Target could so readily identify a pregnancy on the basis of consumer behavior, and because Cambridge Analytica improperly harvested the data of more than 80 million individual Facebook users, concerns about overreaching surveillance and the need for privacy protections rose to prominence. Tech moguls were bluntly telling us that privacy as a social norm was dead. Devices in our homes were listening inappropriately. Europe led in establishing protections, most notably the General Data Protection Regulation (GDPR), but there were still few established parameters for what is and is not acceptable. Bringing us full circle at the end of the decade, in December of 2019, the New York Times published “Twelve Million Phones, One Data Set, Zero Privacy,” an investigation into the smartphone tracking industry.

Placing Your Bets on the Next Ten Years

All that said, what lies ahead in technology during the roaring twenties of the 21st century? At this point, an information industry professional may well tire of old news stories and begin wondering about future applications and innovations.

What expectations might there be with regard to computing power? Experts are telling us what to look for.

What should we be looking for with regard to the active use of artificial intelligence?

  • Forms of AI will be in use throughout our society. The question is how well it will work in critical areas, such as patient care.
  • Transparency and explanation of how AI is implemented in systems. (Note: Data trusts may be key to rebuilding trust in proprietary or corporate systems that are fueled by personal data.)
  • Library use of AI in the context of providing users access to enhanced information services and platform infrastructure. Will we see AI support better discovery and relevancy?
  • AI as a means of enhancing cybersecurity.
  • The ethical use of artificial intelligence.

What should we be thinking about, with regard to facial recognition technology? What are the pros and cons?

What will the impact of 5G be for connectivity and access? Experts have a variety of opinions.

Will we indeed see in 2020 “a swift adoption of new, broader, ‘full-cycle’ data science platforms that will significantly simplify tasks that formerly could only be completed by data scientists and boost the productivity of citizen data scientists — business analysts and other data experts who have domain expertise but are not necessarily skilled data scientists”?

Still, one thing remains as true today as it did in 2010. In the coming decade, we can expect that innovation will continue to catch us by surprise.