Protocol Test Harness Crackberry


Copyright © 2016 by Amy Webb
Published in the United States by PublicAffairs™, an imprint of Perseus Books, LLC, a subsidiary of Hachette Book Group, Inc.
All rights reserved. Printed in the United States of America.
No part of this book may be reproduced in any manner whatsoever without written permission except in the case of brief quotations embodied in critical articles and reviews. For information, address PublicAffairs, 250 West 57th Street, 15th Floor, New York, NY 10107.
PublicAffairs books are available at special discounts for bulk purchases in the U.S. by corporations, institutions, and other organizations. For more information, please contact the Special Markets Department at Perseus Books, 2300 Chestnut Street, Suite 200, Philadelphia, PA 19103, call (800) 810-4145, ext. 5000, or e-mail special.markets@perseusbooks.com.
Book design by Jeff Williams
Library of Congress Cataloging-in-Publication Data
Names: Webb, Amy, 1974– author.
Title: The signals are talking: why today’s fringe is tomorrow’s mainstream / Amy Webb.
Description: First edition. New York: PublicAffairs, [2016]. Includes bibliographical references and index.
Identifiers: LCCN 2016028425 (print) LCCN 2016039541 (ebook) ISBN 9781610396677 (ebook)
Subjects: LCSH: Business forecasting. Strategic planning. Technological innovations.
Classification: LCC HD30.27 .W39 2016 (print) LCC HD30.27 (ebook) DDC 658.4/0355—dc23
LC record available at https://lccn.loc.gov/2016028425
First Edition
10 9 8 7 6 5 4 3 2 1

For professionals who want to harness power from the sun through a solar tracking system, many algorithms have already been programmed to do the job. SUN PATH AND SUN TRAJECTORY Figure 1.25: Track the sun’s position (elevation and azimuth) for any location and date using a BlackBerry (Crackberry, 2014). Description: The Communication Protocol Test Harness is a Windows™ application that simulates Master or Outstation devices, monitors communications, or performs custom functional tests. A module that automatically performs the Modbus Conformance Test Procedures is included with the standard Modbus license.
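The harness itself is a packaged Windows application, but the kind of traffic a Modbus master exchanges with an outstation is easy to picture. The lines below are a minimal sketch rather than anything shipped with the product: they frame a single Modbus/TCP "read holding registers" poll using only the Python standard library, with the host address, unit ID, and register range as placeholder values.

import socket
import struct

def read_holding_registers(host, port=502, unit_id=1, start_addr=0, count=2):
    # Modbus PDU: function code 0x03, starting address, quantity of registers
    pdu = struct.pack(">BHH", 0x03, start_addr, count)
    # MBAP header: transaction id, protocol id (0), length of what follows, unit id
    mbap = struct.pack(">HHHB", 1, 0, len(pdu) + 1, unit_id)
    with socket.create_connection((host, port), timeout=2) as sock:
        sock.sendall(mbap + pdu)
        resp = sock.recv(260)
    # Response layout: 7-byte MBAP, function code, byte count, then register data
    byte_count = resp[8]
    return struct.unpack(">" + "H" * (byte_count // 2), resp[9:9 + byte_count])

# Example use against a local Modbus outstation or simulator (placeholder address):
# print(read_holding_registers("127.0.0.1"))

A full conformance run would go further, exercising exception responses, boundary addresses, and timing, which is the sort of checking the harness automates.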


To my daughter, Petra, and to her classmates: We have entrusted the future to you. Be bold and bright.


CONTENTS
Introduction “Hello, Are You Lost?”
1 The Instructions A Futurist’s Playbook for Every Organization
2 When Cars Fly Understanding the Difference Between Trend and Trendy
3 Survive and Thrive, or Die How Trends Affect Companies
4 Finding the Fringe Seek Out the Unusual Suspects
5 Signals Matter Uncovering Hidden Patterns
6 The “Uber for X” Trend Ask the Right Questions
7 Is It Really Awesome? Know When the Timing Is Right
8 If This, Then That A Formula for Action
9 Greetings, Robot Overlords Pressure-Testing Your Strategy
10 Reverse-Engineering the Future
Glossary of Concepts and Terms
Acknowledgments
Notes
Index


INTRODUCTION “Hello, Are You Lost?”THE FUTURE DOESN’T simply arrive fully formed overnight, but emerges step by step. It first appears at seemingly random points around the fringe of society, never in the mainstream.Without context, those points can appear disparate, unrelated, and hard to connect meaningfully. Butover time they fit into patterns and come into focus as a full-blown trend: a convergence of multiplepoints that reveal a direction or tendency, a force that combines some human need and new enablingtechnology that will shape the future. It’s something I discovered living in Japan, way back in the twentieth century.Akihabara District, Tokyo, 1997. The bottom of my jeans were already drenched as I made my wayfrom the subway through the downpour and past a cacophony of cartoon voices and computer-generated swirls of electronica. The sheer amount of information and noise made it hard toconcentrate. I had a map written in Japanese, but that wasn’t the problem. The waterlogged paper made itimpossible to read the few characters left that hadn’t blurred entirely. I found myself under someelevated railroad tracks and standing in front of a nondescript door, but the hacker friend I expected tomeet was nowhere in sight. Maybe I was in the wrong place. I shoved my hands deep into my coat pockets and squeezed past a series of twisting alleys alllined with rows and rows of circuits, motherboards, cables, wire cutters, and tiny plastic parts of allshapes and sizes. More information. More noise. There were “no smoking” signs everywhere, but thatdidn’t stop the group of men walking ahead of me. Eventually, I stopped at a tiny electronics shack and tried to read the map again. “Hello,” I heard a tentative voice. “Are you lost?” He was, it turned out, a computer geek, albeit one who had a couple of decades on most of thefolks who make up this species. Tattered back issues of Pasokon Ge-mu and “Oh! X” magazineswere piled up next to disassembled PC towers. I explained that I was trying to find my friend, aregular in Akihabara who was building a new kind of game that could be played on a mobile phone.
The corners of his mouth crinkled upward as he motioned me over toward a counter in the back of thestore. On the glass were two small mobile phones. He gave me one and told me to wait. He took theother in his hands and started tapping on the alphanumeric keypad. A moment later, I saw a messageflash on my screen. —“hello” in Japanese. I’d used mobile-to-mobile messaging before,but tried to muster a “gee-whiz” look so as not to offend him.Then, he sent me another message. This time, the text was blue and underlined. It looked like aweb address, but that wasn’t possible. It was 1997, and back in America, the most exciting mobiletechnology was a compact 1G flip phone that had a retractable antenna. This was something entirelydifferent.“Try,” he said. I pressed a button and the phone started downloading something.“Wait . . . is this a ringtone?” I asked. “Am I on the internet?”On the screen, I moved the cursor down to the link and pressed “enter.” As I did, all the noise andall that information diffused into decipherable nodes of data. I could hear the signals talking.This phone in my hand was an experiment on the fringe, a clever hack. I shifted my thought tonetworks of phones all connecting to the internet, to websites, to the Shinkansen train schedule . . .Another signal. If we could receive information, we would necessarily give out our information,too—passively and directly. We would buy train tickets, right from our phones. Network operatorswould know details about us, what we clicked on, what we downloaded. Service providers wouldearn revenue based on our usage. They would have incentives to provide more bandwidth and fasterspeeds . . .Another signal. I started thinking about all the other early research I’d been hearing about. Japanwas on the brink of a much faster mobile network that would allow for more people to connect atonce. Increased capacity also meant higher speeds, and for the first time, the ability to send files toother devices . . .Another signal. Digital cameras were getting smaller. An engineering professor at Dartmouth wasat work on an active pixel image sensor, something so tiny it could be embedded into a pen. TwoJapanese companies, Sharp and Kyocera, were trying to put image sensors into their phones.Teenagers had become obsessed with puri-kurabu photo vending machines—they regularly visitedwith friends, posing for photos of themselves. They’d use an interactive screen to decorate the photoswith different backdrops and doodles before printing them out as stickers.I listened as the signals connected me to adjacent nodes. I knew of others who wereexperimenting with tangentially related projects. A startup in New York City had successfully wrestedelectronic mail—“email,” for short—from university researchers and turned it commercial. For thefirst time, everyday people were getting online, transfixed by this new medium and excited aboutsending fast, short messages between computers within just a few seconds. Commercial emailnetworks were starting to boom, unable to meet demand. At the same time, consumer behavior hadstarted to shift. People expected and received faster communication. They created digital identitieswith vanity email addresses. They had access to a “reply-all” command—a futuristic megaphone thatbroadcast their messages to large, engaged audiences.And then there was the group of mad scientists out in Sunnyvale, California—engineers who’dcreated the first car-based GPS in the early 1980s. 
Nothing remotely similar had existed until that point, so they had to borrow an ancient Polynesian term to name the thing they’d built. They called it
the Etak Navigator1 (Polynesian for “a set of moving navigational points”); it was so far ahead of itstime that its value meant little, if anything, to the average consumer. I remembered reading an oldissue of Inc. magazine, where the founder of Etak explained his bigger vision: “Let’s say you’re inyour car, and you want to go to dinner. You’ve got this box on the dash. You punch in ‘Japanese,’ then‘cheap,’ then ‘good sushi.’ The box takes over and guides you to a place.”2 The Etak never made its way into our cars, but standing there, holding this black mobile phone inthe middle of Akihabara, I could imagine a future version of myself using an adapted form of thatfringe technology. I’d punch in “good sushi” and text my hacker friend the GPS coordinates of whereto meet me. Rather than carrying around a camera so that I could take photos, get them developed, andsend them through the mail back to the United States, I’d make a video phone call to my parents andshare my sushi dinner with them, in real time. Suddenly, I realized I wasn’t lost at all. I heard the signals talking, and they were telling me howthis experimental phone from the fringe would eventually enter our mainstream to dramaticallytransform all facets of human life in the future. I was holding a physical manifestation representingbreathtaking change: it would reshape how we operate our businesses, how we work, learn, andrelate to each other. It would democratize our access to knowledge. It would manipulate our attentionspans and shift the neural pathways in our brains. It would speed life up and usher in a universalexpectation of immediate access to information and, inevitably, a culture of on-demand goods,services, and content. “Mirai kara kita ne.” It’s from the future, said the old computer geek. “No,” I told him. “Not from the future.” Because right now, standing in his tiny electronics stall in Akihabara, we were in the present. Justas the phone hadn’t traveled back in time from some futuristic date to 1997, neither was our pre-mapped destiny already written in the stars. It was up to us to listen to the signals talking, and to mapout the future for ourselves.Waterloo, Ontario, 2007. Mike Lazaridis, the cofounder of BlackBerry, was working out on histreadmill at home, staring up at the television. Forgettable commercials cycled through every fifteenminutes. Then one caught his attention. Set against a minimalist black background was a hand holdinga mobile phone, one that had no buttons. A male voiceover began: “This is how you turn it on,” andwith a simple swipe the phone was unlocked, revealing a dozen candy-colored, sleek icons. “This isyour music,” the voice continued, as the phone turned horizontally and album covers appeared, whichcould be flipped through with the simple flick of a finger. “This is the web,” the voice said, and theNew York Times instantaneously loaded inside of a web browser, mimicking exactly what it lookedlike on a computer screen. “And this is a call on your iPhone,” the voice said at last, before Apple’siconic logo faded in.3 Lazaridis, a global pioneer in mobile communications, hadn’t seen the iPhone coming. And yethere was this new trend in mobile technology—a computer-like phone, with no buttons—that wasnow entering the mainstream. He found out about the iPhone via a commercial, just like everyoneelse.4 That summer, Lazaridis got his hands on an iPhone and pried it open. He was shocked by what he
saw—it was as if Apple had stuffed a Mac computer into this tiny, handheld mobile device. Two decades earlier, Lazaridis and a fellow engineering student, Douglas Fregin, had founded acomputer-science consulting company, which they called Research in Motion, or RIM. Theirbreakthrough product was a new kind of mobile phone, which offered workers the ability to send andreceive emails securely while they were out of the office. They called it the BlackBerry.5 BlackBerry quickly became a status symbol as much as an essential productivity tool. “If you hada BlackBerry you were an important person, as at that time a lot of people didn’t have a smartphone,”said Kevin Michaluk, founder of the CrackBerry.com news site. Vincent Washington, who was asenior business development manager, said that new product meetings would often remind him of thatinfamous briefcase from Pulp Fiction. Lazaridis would walk in with his own special briefcase, and“there would be this golden glow of devices.” Brendan Kenalty, who was in charge of RIM’scustomer base management, often found himself chided for his job title. Why on earth would anyoneneed a loyalty and retention strategy for a BlackBerry?6 Lazaridis was curious, but dismissive. With a device that had become so addictive andindispensable—it did earn the nickname “CrackBerry,” after all—RIM had become one of the largestand most valuable companies in the world, valued at $26 billion.7 It controlled an estimated 70percent of the mobile market share and counted 7 million BlackBerry users.8 Lazaridis already had a successful suite of products, so he and his team weren’t watching thefringe. They weren’t paying attention as a new trend emerged—smartphones that would become all-purpose mobile computing devices, with the power of a PC right in our pockets. Rather than carryinga BlackBerry for business and an iPod or a laptop for personal use, consumers would naturallygravitate toward one device that could meet all the demands of their everyday needs and work tasks. Initially, it wasn’t clear that this single-device trend—and especially a phone with such aradically different design—would stick. In addition to disparaging the iPhone’s short battery life andweak security relative to the BlackBerry, Lazaridis mocked its lack of a physical keyboard: “Trytyping a web key on a touch screen on an iPhone, that’s a real challenge. You cannot see what youtype.”9 At its launch, comparisons to the BlackBerry were inevitable, and they were harsh for the iPhone.Adding a calendar event or updating a contact had to be synched manually on an iPhone. There wasno push email, and the inbox system was confusing. The Safari browser offered a stunning interface,but it was extremely slow, even with text-only pages. Apple’s iTunes store may have offered far moreapps, but could they be trusted? They’d been made by outside developers, not certified partners aswas the case with BlackBerry. These arguments further distracted RIM from recalibrating its strategy and from monitoring thefringes of society, even as it was becoming clear that the iPhone was ushering in a new era of mobileconnectivity. Rather than quickly adapting its beloved product for a new generation of mobile users,RIM continued tweaking and incrementally improving its existing BlackBerrys and their operatingsystems. But that first iPhone was in many ways a red herring. Apple swiftly made improvements tothe phone and the operating system. Soon it became clear that the iPhone was never intended tocompete against the BlackBerry. 
Apple had an entirely different vision for the future of smartphones—it saw the trend in single devices for all of life, not just business—and it would leapfrog RIM as a result. Cisco and SAP adopted iPhones. Apple and IBM entered into a long-term partnership to develop
one hundred new apps. As RIM executives struggled to understand how they’d been blindsided bythis new trend, the company was forced to launch a desperate marketing campaign that paid iPhoneusers up to $550 to switch back to a classic BlackBerry. In 2012, Lazaridis and his co-CEO JimBalsillie stepped down. By the end of 2014, RIM’s market share had collapsed to 1 percent.10 BlackBerry executives failed to make the necessary leaps like the ones I’d made a decade earlierin Akihabara. I was immersed in the fringe, looking at new experimentation and research, spottingpatterns and working out possible scenarios for the future. They kept their heads down, fixated ontheir successful product. “Success is a lousy teacher,” wrote Microsoft cofounder Bill Gates. “Itseduces smart people into thinking they can’t lose.”11 Success rendered RIM helpless in the end. What about the rest of us? Are we helpless as well,because the future is full of surprise competitors and moonshot devices? Polaroid, Zenith,Blockbuster, Circuit City, and Motorola struggled because the future surprised them, too. Rather thanhelping to create their new reality, executives were instead asking themselves, “How did we missthat?” README.TXTThis book contains a method for seeing the future. It’s an organized approach that, if followed, willadvance your understanding of the world as it is changing. Reading it, you will learn how to think likea futurist, and to forecast emerging trends as they shift from the fringe to the mainstream, and how tomake better decisions about the future, today. If you are in any position of leadership—whether you’re the CEO of a large corporation, amember of a nonprofit board, a mid-level human resources manager, a media executive, an investor, achief marketing officer, a government administrator, a school superintendent, or the head of yourhousehold—you must strategically monitor trends and plan for the future. Failing to do so will putyour organization and your future earnings at risk, but there are greater forces at work. If humans donot make a greater effort to understand the implications of our actions today, we are in danger ofjeopardizing our own humanity. I am a futurist, and I research emerging technology and forecast trends for a living. The term“futurology” comes from the Latin (futurum, or future) and the Greek suffix –logia (the science of),and it was coined by a German professor named Ossip Flechtheim in 1943,12 who, along with authorH. G. Wells several decades earlier,13 proposed “futurism” as a new academic discipline. It’s aninterdisciplinary field combining mathematics, engineering, art, technology, economics, design,history, geography, biology, theology, physics, and philosophy. As a futurist, my job is not to spreadprophecies, but rather to collect data, identify emerging trends, develop strategies, and calculate theprobabilities of various scenarios occurring in the future. Forecasts are used to help leaders, teams,and individuals make better, more informed decisions, even as their organizations face greatdisruption. Technology is the unilateral source of nearly all of the significant things that have changed theworld in the past five hundred years, including movable type, the sextant, the moldboard plow, thecotton gin, the steam engine, oil refining, pasteurization, the assembly line, photography, the telegraph,nuclear fission, the internet, and the personal computer. At some point, these were all mere fringe
science and technology experiments. This is not a book about technology trends per se, as a book of today’s trends would be outdatedand useless even before it came off the press. That’s how fast the world is changing. A book that onlyoffers a series of trends would force you to apply someone else’s vision of the future to your ownorganization, industry, or market. Technology trends themselves—smartwatches, virtual reality, theInternet of Things—make for good media headlines, but they don’t solve for the ongoing questionsfacing every organization: What technology is on the horizon? How will it impact our customers orconstituents? How will our competitors harness the trend? Where does the trend create potential newpartnerships or collaborators for us? How does this trend impact our industry and all of its parts?Who are the drivers of change in this trend? How will the wants, needs, and expectations of ourcustomers change as a result of this trend? To answer these questions, you need more than someone else’s prognostications. You need aguided process to evaluate and adapt the pronouncements made by researchers, other businesspeople,and thought leaders within their professional spaces. You need a way to see the future for yourself. The Signals Are Talking is a systematic way of evaluating new ideas being developed on thefringe that, at first blush, may seem too “out there” to affect you. But in every possible construct, ourfuture is completely intertwined with technology, and as I discovered in Tokyo’s Akihabara District14in 1997, nothing in technology is ever really too esoteric that it doesn’t deserve a few moments ofattention. There is no possible scenario where technology does not play a significant role in the years,decades, and centuries to come. Therefore, the trends we must track and the actions we put into placenecessarily involve technology in some way. The method in this book is made up of six steps. You can think of it as a set of instructions for thefuture—though this is no ordinary instruction manual. First, you must visit what I call the “unusualsuspects” at the fringe. From there, you will uncover hidden patterns, connecting experimentation atthe fringe to our fundamental human needs and desires. The patterns will reveal to you a possibletrend, one you’ll then need to investigate, interrogate, and prove. Next, you’ll calculate the trend’sETA and direction: Where is it heading, how quickly, and with what momentum? However,identifying a trend isn’t enough—as RIM discovered in 2008, when it attempted to launch its self-described “iPhone killer.” You must develop probable, plausible, and possible scenarios in order tocreate a salient strategy in the present. There is one final step: pressure-testing the strategy against thetrend to make sure the action you’re taking is the right one. The instructions are illustrated with stories that range from Sony being brought to its knees byhackers, even though company executives could have easily foreseen its future troubles, to thescientific community being shocked, and then outraged, when it learned that Dr. Ian Wilmut and histeam had cloned a sheep named Dolly. These and other stories may be familiar to you. But when we use the instructions to decipher thesignals, what you see will start to seem quite strange. Your perception of present-day reality will, Ihope, be challenged. You may even feel disoriented. But I feel confident that you will never interpretthe world around you in quite the same way again. Turn the page and listen closely. 
The signals are talking.


CHAPTER ONE The Instructions A Futurist’s Playbook for Every OrganizationWHAT WAS ONCE top-secret military technology has left the domain of government and is now sitting in my living room, with its batteries recharging. I’ve used it to take photos of mydaughter’s kindergarten class field trips. It came in handy when I noticed a possible leak in our roof. Iflew it after a big winter storm to survey whether my neighborhood streets had been cleared.Realizing they hadn’t, I streamed aerial footage to my neighborhood association and asked that theysend a plow. It’s a drone, and a rather unremarkable one at that. Just like many of the other consumer modelsavailable for purchase, it has four propellers and will fly autonomously along my preset waypoints. In 2015, two drones operated by civilians trying to capture video inadvertently preventedfirefighters from putting out a rapidly spreading California wildfire.1 As a result, the fire crossedover onto a freeway and destroyed a dozen vehicles. There were several incidents of drones flyingaround airports to shoot photos and video, too: in one case reported to the Federal AviationAdministration (FAA),2 a drone just missed the nose of a JetBlue flight. In another, a drone got in theway of a Delta flight trying to land.3 By the end of 2015, the FAA was estimating that a million drones would be sold and given asholiday presents that year4—but neither the FAA nor any other government agency had decided onregulations for how everyday Americans could use them. Close encounters with airplanes promptedconversations about whether or not the airspace should be regulated, which forced dronemanufacturers and the aviation industry into uncomfortable conversations, since each has an economicstake in the future of unmanned vehicles (UMVs). Drones were a fringe technology barreling toward the mainstream, and a lack of planning andforesight pitted dozens of organizations against each other. One proposal from Amazon called for anew kind of drone highway system in the sky, separating commercial UMVs from drones belonging tohobbyists, journalists, and the like. Hobbyists like me would be restricted to flying below an altitudeof two hundred feet, while commercial delivery drones—including the fleet Amazon is planning to
launch—would gain access to a zone between two hundred and four hundred feet overhead. The restof the airspace would belong to planes.5 It certainly sounded like a reasonable plan, but it lacked context: namely, emerging trends fromadjacent fields. No one involved in the proposals and debate considered how restricting the airspacemight impact us in ways that have nothing to do with midair collisions. They dealt with an issue in thepresent-day, but didn’t go through the process of forecasting likely developments that would intersectwith this plan in the future. Let me walk you through how a futurist would address this problem. Since there are many issuesinvolved, let’s analyze just one plausible scenario that connects a series of unrelated dots. It will, Ibelieve, reveal why stopping to focus on flying altitudes alone, rather than mapping out the fulltrajectory of drones as a trend, would result in unintended changes in geopolitics and widespreadenvironmental damage in the future. If commercial drone lanes operate in an altitude range of two hundred to four hundred feet, a newtwenty-five-story apartment building might require a special right-of-way easement, which could becostly and tedious to pursue. So it might be easier for architects to start building laterally. But whowants to walk the length of a football field just to get to a morning meeting? As it happens,ThyssenKrupp, a German engineering firm, has invented self-propelled elevators that can travel bothhorizontally and vertically.6 Rather than taking an elevator up twenty floors, you could take it acrossthe expanse. With these conditions in place, a new kind of building, which I’ll call a “landscraper,”will start to occupy all that empty land covering much of the United States. Environmentalists willprotest, arguing that soil displacement will flood local rivers and streams with sediment, killing offthe plants that feed the fish, which in turn feed terrestrial wildlife. But if the drone-lane proposal isaccepted, we would wind up with busy overhead highways. The only open space would behorizontal. The result: a necessary shift in how our cities are built and maintained. The change would be feltless in places like New York City, where there is scant open land available, and more in lesspopulated areas between the East and West coasts. Landscrapers would be developed in smallercities across the Plains and Midwest, helping to catalyze new centers of business and innovation(where Google has started to lay fiber networks). Our thriving urban centers of the future will be SanAntonio, Kansas City, and Oklahoma City. Established tax bases, congressional districts, andeducational resources would be disrupted. Without proper advance city planning, these new hubs willsuffer from traffic jams and a lack of sustainable basic civic resources, such as housing—issues thathave already become significant problems in communities like Austin, Texas, and San Jose,Sunnyvale, and Santa Clara, California. American farmers will be happy to sell their land, destroying big agricultural corporations likeMonsanto, Dupont, and Land O’Lakes. Without American farms, we’ll find ourselves forced tobecome less than self-sufficient in food resources and more reliant on agricultural imports, changingthe geopolitical power dynamic between the United States and countries such as China, Mexico,India, and Canada, which would become our primary fruit and vegetable providers. 
All because in 2015, we thought it would be cool to fly an unmanned vehicle up into the air to take some pictures for our blogs and social feeds. This future scenario won’t simply arrive, fully formed, as I’ve just described and as a futurist would forecast. Rather, it will evolve slowly over a period of years, and as various pieces fall into
place, we would continue to track the trends and recalibrate our strategy. At first, all thesedevelopments will seem novel, unconnected, and random, like odd experiments or impossibletheories hatched on the fringe of society. Without context, those points can appear disparate,unrelated, and hard to connect meaningfully. (Invisible drone highways in the sky? Landscrapers?)But over time, they will fit into patterns and come into focus: a convergence of multiple points thatreveal a direction or tendency, a force that combines some human need and new enabling technologythat will shape the future. Futurists are skilled at listening to and interpreting the signals talking. It’s a learnable skill, and aprocess anyone can master. Futurists look for early patterns—pre-trends, if you will—as the scatteredpoints on the fringe converge and begin moving toward the mainstream. They know most patterns willcome to nothing, and so they watch and wait and test the patterns to find those few that will evolveinto genuine trends. Each trend is a looking glass into the future, a way to see over time’s horizon. Theadvantage of forecasting the future in this way is obvious. Organizations that can see trends earlyenough to take action have first-mover influence. But they can also help to inform and shape thebroader context, conversing and collaborating with those in other fields to plan ahead. No one should plan for a future she cannot see. Yet that is exactly what’s happening every day inour boardrooms and legislative office buildings. Too often, leaders ignore the signals, wait too longto take action, or plan for only one scenario. Not only will first-movers create new strategies, thoughtleadership, hacks, or exploits to align with the trend, they are likely developing third and fourthiterations already. As a trend develops and advances, a vast network is being formed, connectingresearchers to manufacturers, venture capital money to startups, and consumers with strange newtechnologies—such as a drone with such a sophisticated onboard computer that it can be sent onreconnaissance missions well past our line of sight. As is often the case with new technologies, thosein leadership positions wait until they must to confront the future, which by now has already passedthem by. The paradox of the present is to blame: we are too fearful about the intricacies of technology,safety, and the needs of the various government agencies and equipment manufacturers to think morebroadly about how technology like drones might emerge from the fringe to become our futuremainstream. I may be wrong, but I suspect that few, if any, leaders in organizations working on the future ofdrones today are following a futurist’s playbook, giving thought to traffic congestion in San Antonio,farmers in the Midwest, or our potential dependence on Chinese corn in the world we are creating fortomorrow. LIZARD BRAINSWe must dedicate time and effort to planning for the future. However, our fear and rejection of theunknown has been an ongoing thread throughout human history. The fact that we continue to strugglewith this problem, from generation to generation, suggests either that Friedrich Nietzsche was right,and that we’re living the exact same life now that we’ve lived an infinite number of times in the past,7or that we’ve internalized a belief that the future is something that happens to us, rather thansomething that we, in fact, create.


Our resistance to change is hardwired in the oldest, reptilian portion of our brains, which islocated down by the brainstem and cerebellum. It’s that section that’s responsible for our automatedvital functions, such as our heart rate and body temperature. It also controls our “fight-or-flight”response, which has preserved and protected humans throughout our evolution. When that system getsoverwhelmed with a complex new concept or is forced to make a decision about an unfamiliar topic,it protests by causing us psychological distress, fear, and anxiety. Adrenaline floods our bodies sothat we’re physically ready to fight or flee if we need to. Like breathing, our resistance to newtechnology happens automatically, without thought. In 1970, social thinker Alvin Toffler theorized about a “future shock” in his groundbreaking bookof the same name,8 arguing that the emerging computers and the race to space would causedisorientation and fragmentation within our society. British physicist and Nobel Prize winner SirGeorge Thomson posited that the nearest parallel of technological changes taking place in the late1960s to early 1970s wasn’t the Industrial Revolution, but instead the “invention of agriculture in theNeolithic age.”9 At that same time, John Diebold, the American automation pioneer, warned that “theeffects of the technological revolution we are now living through will be deeper than any socialchange we have experienced before.”10 Adapting to big, sweeping disruption or taking risks on unproven technology causes that part ofour lower brains to kick into gear. It’s more comfortable for us to make incremental changes—wetrick ourselves into feeling as though we’ve challenged the status quo in preparation for the future,without all that reptilian distress. Our reptilian brains sometimes tempt us into denying that change is afoot in any meaningful way.Many prominent thinkers would disagree that this is the first time in human history when real,fundamental change is taking place within a single generation, and the driving force is technology. Forexample, economist Robert Gordon argued in The Rise and Fall of American Growth that ourgreatest innovations occurred between 1870 and 1970, and that that era’s level of American ingenuityand productivity cannot be repeated.11 Those one hundred years ushered in life-altering change thatwas immediately observable and uncomplicated: the discovery of penicillin eradicated manybacterial infections; Henry Ford’s assembly-line production brought automobiles to the masses;submarines took warfare below the oceans; robotic equipment replaced humans in factories; radiodelivered the news in every American’s living room. And yet, compared to that time period, the advancements of today are orders of magnitude morenuanced and complex, and without intentional effort, they are difficult to see. Take, for example, thequantum computer. This is an entirely new kind of system capable of solving problems that arecomputationally too difficult for our existing machines. The computer at your home or office can onlyprocess binary information expressed as 1s and 0s. In quantum computing, those 1s and 0s actuallyexist in two states (qubits) at once, allowing computations to be made in parallel. If you build twoqubits, they hold four values simultaneously: 00, 01, 10, and 11.12 When a programmer needs to debug a system, she can write code that copies and extracts thevalues from the correct 1s and 0s. It’s straightforward. 
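A two-qubit register, in other words, can be pictured as a vector of four amplitudes, one for each of 00, 01, 10, and 11. The short sketch below is only an illustration, not drawn from the book, and it uses NumPy for the arithmetic: it spreads a register across all four values at once and then shows measurement collapsing it back to a single classical answer.

import numpy as np

# Start in |00>: all of the amplitude sits on the "00" value.
state = np.array([1, 0, 0, 0], dtype=complex)

# A Hadamard gate puts one qubit into an equal mix of 0 and 1;
# applying it to both qubits spreads the register over 00, 01, 10, and 11.
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
state = np.kron(H, H) @ state
print(np.round(state.real, 3))   # [0.5 0.5 0.5 0.5] -- four values held at once

# "Observing" the register forces it to pick one classical outcome,
# which is why reading quantum data in transit changes it.
probabilities = np.abs(state) ** 2
print(np.random.choice(["00", "01", "10", "11"], p=probabilities))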
In a quantum system, those 1s and 0s form different combinations, and the very act of trying to observe that data as it is in transit changes its nature. Yes, quantum machines are computers—but they’re not like any computer you’ve seen before. Not in the way they look, or in how they operate, or in the functions they can perform. You may never see a quantum computer, and even if you do, it will appear rather unremarkable—
in the present day, it looks like a big enclosed server rack. The only remarkable aesthetic change inthe farther future is that it will shrink in physical size. But you will benefit from the technologynonetheless: quantum computing will be used for encrypting your personal data and your credit cardnumber when you’re shopping, in figuring out how to extract pollution from the air, and in designingnew personalized drugs and predicting the spread of future public health epidemics. A generation ago, a single computer took up an entire room—and Pluto was still a planet floatingin a theoretical icy belt system beyond the orbit of Neptune. Today, you have access to morecomputing power in your little smartphone than all of the National Aeronautics and SpaceAdministration (NASA) did when it sent Neil Armstrong, Buzz Aldrin, and Michael Collins to themoon. Your smartphone seems pedestrian because you are only exposed to the final product—youdon’t see the underlying technology that powers it, and how that tech is evolving independent of thedevice itself. Yes, we send Tweets to reality TV shows. We Instagram no-makeup selfies. We allowour phones to track and monitor our levels of fitness, our whereabouts, our vital signs. And then weshare our personal information with whoever’s interested, even with complete strangers whom wewill never meet. Just as many people discounted that early internet-connected phone I described in theIntroduction, you may be tempted to argue that our smartphones are toys that cannot be compared toputting humans on the moon—not technological breakthroughs. However, the very technology that’s inyour phone is being used to fundamentally alter the operations of most businesses, to perform life-saving medical tests in remote areas, and to change our political ideas and worldviews. One of the reasons you don’t recognize this moment in time as an era of great transformation isbecause it’s hard to recognize change. Another reason: novelty has become the new normal. The paceof change has accelerated, as we are exposed to and adopt new technologies with greater enthusiasmand voracity each year. Consider the washing machine, a groundbreaking new technologicalinnovation when it was introduced in the early 1900s. It took nearly three decades for more than 50percent of Americans to buy them for their homes.13 In 1951, CBS broadcast the “Premiere,” the firstshow in color,14 and within fifteen years the majority of households had abandoned their black-and-white sets.15 Between 2007, when the first-generation iPhone was released, and 2015, more than 75percent of Americans bought some kind of smartphone.16 In fact, 7 percent of us have now abandonedour landlines and traditional broadband services altogether.17 The year Toffler’s Future Shock was published, about 7,000 new products entered America’ssupermarket shelves. Fifty-five percent of them hadn’t existed a decade previously.18 In 2014, 22,252projects were successfully funded on Kickstarter.19 One of them came from a guy with an idea for acomputerized watch, the Pebble. He raised $10 million from 69,000 individual backers and forcedbig, established companies like Apple and Samsung to hurry up and get their own products tomarket.20 We’ve even had to invent a new term for all the tech startups crossing the billion-dollar valuationthreshold: “unicorns,” because investments on that scale had previously been just a myth. 
By mid-2015 there were 123 unicorns, with a total cumulative valuation of $469 billion.21 To put that incomprehensible number into perspective, Uber’s $51 billion valuation was equal at that time to the gross domestic product (GDP) of Croatia.22 The gravitational pull toward what’s new, what’s now, and what’s next has left us in a constant
state of fight-or-flight. Paradoxically, we both worry about and look forward to the latest gadgets andtools. Overwhelmed with the sheer amount of new shiny objects, we don’t take the necessary stepback to connect all the dots and to ask: How does one technology influence the other? What’s reallygoing on? Are we missing a bigger and more important trend? What trajectory are we on, and does itmake sense? These are questions futurists think about all the time. But when it comes to organizations,it’s only after a fringe technology moves into the mainstream that we suddenly raise concerns, attemptto join in, or realize it’s too late—and that an industry has been upended. Because we lack this necessary dialogue on future forecasting, when it comes to technology-driven change, organizations are philosophically schizophrenic, arguing for and against contradictorypositions. We may have initially lambasted Edward Snowden, who in 2013 leaked classifieddocuments about cybersecurity and digital surveillance through the press, but with some distance hascome appreciation. Political leaders, news organizations, and everyday people at one point called forSnowden’s arrest (and worse). Then we changed our minds. In a January 2014 editorial, the New YorkTimes editorial board wrote: “Considering the enormous value of the information he has revealed,and the abuses he has exposed, Mr. Snowden deserves better than a life of permanent exile, fear andflight. He may have committed a crime to do so, but he has done his country a great service. . . . Inretrospect, Mr. Snowden was clearly justified in believing that the only way to blow the whistle onthis kind of intelligence-gathering was to expose it to the public and let the resulting furor do the workhis superiors would not.”23 We don’t suffer from the “future shock” that Toffler warned us about as much as we suffer fromongoing disorientation. We are bewildered at the implications of technology because technology isbecoming more pervasive in our everyday lives. From biohacking our genomes to robots that canrepair themselves, it’s becoming more and more difficult to make informed decisions about the future. But decisions must be made, and either subconsciously or with dedicated effort, each one of us ismaking thousands of them every single day, including two hundred on food alone.24 Which app shouldyou build? Which new innovation should you try? Which startup should you back? In which directionshould you pivot? Those are in addition to the more quotidian decisions, like which movie to watchon Netflix, what song to stream on Spotify, what dinner entrée to order from Seamless, or which ofone of the 2,767 versions of the board game Monopoly to order from Amazon.25 We’ve made a devil’s pact, swapping convenience and efficiency for an ever-increasing tyrannyof information and choice. Technology has forced us to either make poor decisions or make none atall, and it is causing or will eventually lead to cataclysmic, unwelcome disruption. During this periodof intense technological change, we focus too narrowly on value chains rather than thinking about howwhat we’re doing fits into the bigger ecosystem. 
THE PARADOX OF THE PRESENT
As we marvel at the prospects of genomic editing, self-driving cars, and humanoid companions, we have to keep in mind that our present-day reality binds us to a certain amount of perceptual bias. Fight-or-flight may have kept our prehistoric ancestors from getting eaten by a saber-toothed tiger, but over time it has stunted our unique ability to daydream about and plan for a better future. Without a guided process, we fall victim to the paradox of the present. We have a hard time seeing
the future because we lack a shared point of reference rooted in our present circumstances. Howcould you explain to a Sicilian living through the plague in the Middle Ages that in just a few hundredyears, not only would we have invented a simple shot to cure us of many diseases, but robots andlasers would help doctors perform open heart surgery? How could you have explained to Henry Ford,as he sent his first Model T through an assembly line, that his grandchildren would see the advent ofself-driving, computerized, battery-powered cars? Do you think that in 1986, as Toyota’s fifty-millionth car came off the line,26 company chairman Eiji Toyoda would have believed that within afew decades the four biggest car companies wouldn’t be Toyota, Honda, General Motors, and Mazda,but instead Tesla, Google, Apple, and Uber? How could you articulate the concept of quantumcomputing—that the same information could both exist and not exist within a computer simultaneously—to Ada Lovelace, when she wrote the first algorithm ever carried out by a machine? Without instructions as a guide, we face the same perceptual bias as all of the generations whocame before us; we have a difficult time seeing how not only the far future will unfold but the nearfuture as well. Organizations, communities, and we as individuals must cope with hundreds of first-time situations driven by technology at a pace unmatched in any other time in history. We experiencethese micro-moments on a near-daily basis: new mobile apps, new wearable fitness devices, newhacks, new ways to harass others on social media, new directives in how to “binge watch” the latestshow on Netflix. Novelty is the new normal, making it difficult for us to understand the bigger picture. We nowinhabit a world where most of the information that has ever existed is less than ten years old. Fromthe beginnings of human civilization until 2003, five exabytes of data were created. We are nowcreating five exabytes of data every two days.27 In fact, in the minute it took you to read that lastsentence, 2.8 million pieces of content were shared on Facebook alone.28 On Instagram, 250,000 newphotos were posted.29 A lack of information isn’t what is preventing us from seeing the future. Searching for drone onthe visible web (the searchable, indexed part) returns 142 million results.30 There are hundreds ofthousands of forum posts, spreadsheets, and comments about it on the hidden web, too—the deeperlayers of the internet that do not show up on searches for a variety of reasons (they require apassword, they can only be accessed using special software, they’re peer-to-peer networks, or theylack the code necessary for a search engine crawler to discover them). The Washington Postpublished 717 stories about drones during 2015 alone.31 The Brookings Institution published 65 whitepapers, op-eds, and blog posts about drones during that same time period.32 Barraged with ever moreinformation, we must now interpret all this new knowledge and data we’re being fed and figure outhow to make all of it useful. Exposure to more information tends to confuse rather than inform us.Thousands of drones are being flown all around the country. Lawmakers have access to plenty ofinformation, and yet they don’t have a plan for the future. Information overload hampers our ability to understand novelty when we see it. This tendency isespecially pronounced when it comes to technology, where exciting new products launch daily. 
Joost, a much-hyped video service called a “YouTube killer” by tech reporters, raised $45 million in venture capital before launch.33 Color, a photo-sharing app created by two charismatic, popular denizens of Silicon Valley, raised $41 million as a prelaunch tech startup.34 AdKeeper raised $43 million before launch, billing itself as a new kind of digital coupon clipping service.35
In all three cases, the founders promised something unique. But novelty is a distraction, not aclear trend worth tracking. Joost’s investors lost all their money—the timing for streaming videowasn’t right in 2006. Color was a confusing product that consumers didn’t understand and that techbloggers hated. AdKeeper’s pitch sounded interesting, but in practice no one wanted to save thebanner ads they saw online. That’s $129 million in investment that evaporated, and I’ve only givenyou three examples. The paradox of the present impairs our judgment when we’re looking for far- and near-futuretechnologies. If we’re not mistaking trendy apps for bona fide trends, then the paradox tricks us intomistaking a wave of disruption as a once-in-a-lifetime occurrence, so we dismiss that disruption as anovel circumstance—when it’s anything but. PERILS OF THE PARADOX: SONY’S DAEMONSSony, the giant media and electronics company, is all too familiar with the paradox of the present.Sometime in early February 2014, hackers obtained the credentials for two corporate user accounts atSony Pictures Entertainment. Courtney Schaberg, a vice president of legal compliance at Sony, sent anemail to the company’s chief counsel and other executives about the breach, writing that theunauthorized user may have uploaded malware. “The two accounts have been disabled,” she wrote,adding that a colleague was looking into the matter. In a follow-up email, Schaberg said that thehackers had infiltrated SpiritWORLD, a kind of central nervous system for Sony’s distribution ofmedia files as well as billings, bookings, and the like.36 Rather than planning in a meaningful way for the future—like searching for zero dayvulnerabilities (software holes that Sony hadn’t discovered yet), or listening to hacker communitychatter about emerging malware and exploits—the executives instead brushed off the incident. Theyweren’t paying attention to what the signals were telling them—that hackers were increasinglyfocusing their attention on corporations. Epsilon, the largest email marketing service company in theworld, had also been hacked, exposing the account information of 2,500 customer email lists forbusinesses ranging from Wal-Mart to Capital One.37 Hackers had compromised 70 of the in-store PINpads at arts-and-crafts chain Michaels, stealing credit and debit card information, which was laterused to forge ATM cards that got used throughout California and Nevada.38 Citibank revealed thathackers had compromised 200,000 credit card accounts.39 While all these breaches were serious, the underground hacking community had always regardedSony as one of the biggest targets. Sony first raised the ire of the tech community when in 2005 thecompany’s music division took an aggressive stance on its CDs. Sony embedded two pieces ofprotection on its CDs, which prevented them from being copied—but which also secretly installedrootkits onto a computer without the user’s knowledge or permission. (A rootkit is a special kind ofsoftware that is used to gain control of a computer system without being detected.) In a sense, Sonyitself was acting like a hacker, deploying its own malicious code and getting lots of detailedinformation, such as listening habits, sent back to the company. Because the software would runcontinuously, it caused a strain on the computer’s CPU, which ultimately made the whole machinework slower. 
The average person couldn’t easily uninstall the rootkits, which was problematic, especially given that within just two years Sony had sold more than 20 million infected CDs.40
Ultimately, there were big media stories and lawsuits. The Federal Trade Commission (FTC) gotinvolved, finding Sony in violation of US law, forcing the company to clearly label protected discs,and prohibiting it from installing any software without a consumer’s prior consent.41 Any futurist would have heard the signals talking, as there were clear harbingers of what was yetto come. The hacker community, which equated Sony’s actions to collusion within the industry tocontrol what we’re allowed to do with our computers, was incensed. Across internet bulletin boardsand listservs, there were calls to infiltrate Sony’s servers. A few years later, a hacker collectiveknown as fail0verflow found the security codes for the PlayStation 3 and posted a very basic,rudimentary hack online. Next, George Hotz, a high school student who went by the username“GeoHot” and had gained notoriety for jailbreaking his iPhone, announced that he’d found the PS3root key, which allowed anyone to jailbreak the console of PlayStation 3 to run both homemade andpirated software. Hotz not only posted the details on his website, he made a YouTube videoexplainer.42 Needless to say, this didn’t go over well at Sony, which threatened to sue anyone for posting ordistributing the code and demanded that a federal judge order Google and Twitter to hand over the IPaddresses and any other data available for anyone involved. Sony successfully won a temporaryrestraining order, forcing Hotz to surrender his computers to the company.43 It won the right to unmaskthe IP addresses of everyone who had visited Hotz’s website. Sony followed up by releasing amandatory firmware update that would prevent the PS3 from executing any unauthorized code.44 That response only baited the hacker community, which was now ready for war. That firmwareupdate was cracked within hours by KaKaRoToKS, a well-known hacker activist. Someone launchedHasSonyBeenHacked.com, which enthusiastically tracked each and every new exploit. The hackercollective Anonymous mobilized its network, urging hackers to go after Sony in retaliation for thePS3 lawsuit and for trying to throw Hotz in jail, posting online: “Your corrupt business practices areindicative of a corporate philosophy that would deny consumers the right to use products they havepaid for and rightfully own, in the manner of their choosing. . . . Having trodden upon Anonymous’rights, you must now be trodden on.”45 Again, no action by Sony. Remarkably, inside the company, executives treated these incidents asnovel, one-off attacks. They were focused on their successful products, but they hadn’t includedtracking and acting on trends in cybersecurity. In the months that followed, hackers got into Sony’s PlayStation network, stealing the usernames,addresses, birth dates, passwords, password security answers, profile data, credit card numbers, andpurchase/billing history for 75 million people—which wound up costing the company $171 million.46There were twenty-one known major attacks within the next six months.47 By 2014, hackers had lost interest in hacking the PlayStation. But they hadn’t lost interest in Sony.Gaming is just one part of Sony’s global business. The corporate giant also operates divisions inelectronics, music, network services, and financial services. Its products range from image sensorsand semiconductors to digital cameras and LCD televisions—and, of course, movies. Despite the numerous attacks, it is clear that Sony hadn’t made plans to come to grips with theproblem. 
On November 24, 2014, nine months after Schaberg, the VP of legal compliance, sent her message about the SpiritWORLD infiltration, a disturbing image took over all of the employee computer screens at Sony Pictures Entertainment: a realistic-looking red-tinted human skeleton with
claws for hands that seemed to be reaching out of the monitor. Text overlaying the image said: “We’veobtained all your internal data including your secrets and top secrets. If you don’t obey us, we’llrelease data shown below to the world.” There were five links, which went to zipped files, and an11:00 p.m. deadline in a yellow font. The hacker group called itself #GOP, or “Guardians ofPeace.”48 They remotely wiped the hard drives, shut down email, and stole troves of private companydata.49 Those links routed to directories containing highly sensitive internal data, including passwords,credit card numbers, social security numbers, and contracts for Hollywood celebrities, such asSylvester Stallone and Judd Apatow. They included the same for Sony Pictures employees—alongwith their salary information. The hackers had not only released the information, they had preservedthe file structure and original nomenclature, which revealed that Sony had been storing documentsunder plainly labeled filenames like “YouTube login passwords. xlsx,” “Important Passwords-TAAS,Outlook, Novell.txt,” “Password/Social Password Log.xlsx,” “SPI Employees Levels_401(k)sort_pass wordv2.xls,” and a catch-all “UserNames&Passwords.xls.”50 Security experts were stunned by what they saw. Passwords in plaintext. Unencrypted Excelspreadsheets. Open company fileshares from which terabytes of data could be exfiltrated by anyonewho knew how to click to open a basic computer file. Sony had been trapped in the paradox of the present, continually assuming that each new exploitwas novel and unique. By not taking a long view and planning for the future, the company hadallowed a tawdry, humiliating look into the inner workings of Sony Pictures and Hollywood. A longstring of emails between producer Scott Rudin and Sony Pictures’ former co-chairman Amy Pascalincluded one where Rudin called Angelina Jolie a “spoiled brat” who was “a camp event and acelebrity and that’s all”; “the last thing anybody needs,” he wrote, “is to make a giant bomb with herthat any fool could see coming.”51 There were also emails between Sony and Motion PictureAssociation of America (MPAA) attorneys. One message included an attachment for an October 8,2014, agenda in which the parties were set to discuss “scalability and cost of site blocking” and howto migrate blocking to mobile apps at the MPAA office in Sherman Oaks, California.52 Sony, alongwith the MPAA and five other studios—Universal, Fox, Paramount, Warner Brothers, and Disney—were secretly working on legal and technical maneuvers that would allow Internet Service Providers(ISPs) like Comcast to block access to any website hosting pirated Sony Pictures content. The more the hackers dug into the files, the more the circle of damage widened. The hackers usedthe websites Pastebin and GitHub to share daily communiqués, and they also operated a daily emailblast to members of the news media. Soon, they revealed their primary demand: they wanted Sony tocancel its planned release of The Interview, a comedy about two hapless Americans sent toassassinate North Korean leader Kim Jong Un. This attack provides a clear example of how, in our modern age, one technological invention ormisstep in a perceived silo actually affects myriad other industries and individuals. Ultimately, SonyPictures’ failure to track the future of cybersecurity resulted in legislation creating a legal frameworkfor federal agencies and companies to collect and use your personal data, even if you aren’t beinginvestigated for a crime. 
Here’s how a futurist would connect the dots:

Canceling a wide release of a film also meant pulling an estimated $10 million to $12 million marketing spend with other companies.53 Outside of Hollywood, the hackers were threatening acts of physical terrorism at American movie theaters, which lost millions of dollars in potential box-office


revenue. Sony > theater businesses > the entertainment economy. Money was an issue, but the devastating 2012 mass shooting inside an Aurora, Colorado, movietheater showing The Dark Knight was still on everyone’s mind. Security experts reviewing theleaked files could not confirm that the hackers were from North Korea, but there was enoughevidence to take the threat seriously. Who would risk another horrifying attack on innocentmoviegoers? Although the Department of Homeland Security said there was “no credible intelligenceto indicate an active plot against movie theaters within the United States,”54 Sony complied withdemands, halting distribution of the movie.55 Lawmakers, including President Barack Obama, urged Sony to show the movie anyway. In aDecember 19 press conference, Obama said that the studio had made a “mistake” in canceling itsplanned release. “We cannot have a society in which some dictator in some place can start imposingcensorship in the United States. . . . I wish [Sony had] spoken to me first. I would have told them: Donot get into a pattern in which you’re are intimidated.”56 Sony > theater businesses > the entertainment economy > freedom of speech activists > North Korean geopolitical relations with the United States and US allies. Politicians used Sony’s breach as leverage to once again try to pass controversial cybersecuritylegislation. Representative Peter King (R-NY) reignited debate over the Terrorism Risk InsuranceAct, which would reimburse insurers for terrorism-related losses, corporate or otherwise. SenatorDianne Feinstein (D-CA) said she would work to pass a cybersecurity bill as quickly as possible.Senator John McCain (R-AZ) said he would pursue the Secure IT Act, a competitor to the highlycontentious Cyber Intelligence Sharing and Protection Act.57 McCain made good on his promise. In March 2015, the Senate Intelligence Committee held aclosed meeting on S.754, the Cybersecurity Information Sharing Act of 2015, and voted 14–1 toadvance the legislation.58 The House companion legislation, the Protecting Cyber Networks Act,passed on a bipartisan vote of 307–116.59 The Electronic Frontier Foundation called it an “invasivesurveillance bill that must be stopped,” arguing that it “gives companies broad immunity to spy on—and even launch countermeasures against—potentially innocent users.”60 Sony > theater businesses > the entertainment economy > freedom of speech activists > North Korean geopolitical relations with the United States and US allies > passage of controversial cybersecurity legislation that had been defeated years ago. In the end, Sony changed course, announcing that it would release its film to independent movietheaters willing to carry it. Google offered to release it via YouTube and Play, its on-demandplatform, and Sony agreed. During its opening weekend, The Interview grossed $15 million onlineand $3 million in theaters.61 (Sony spent $44 million producing the film.62) But Sony’s financial loss on that film was just the tip of the iceberg, since its lack of foresighttrapped the company in a seemingly endless paradox of the present. Nearly a decade earlier, the


executive director of information security at Sony Pictures had met with an auditor who had justcompleted a review of the company’s security practices. That was in 2005, just after the first big hackin retaliation for the CD malware. In an interview published just after that meeting, the auditor hadrevealed some of Sony’s security flaws, such as poor passwords and unencrypted files. In theresulting story in the November 2005 issue of CIO magazine, he said that it was a “valid businessdecision to accept the risk” of a security breach, and that it wasn’t worth the money or effort to planfor the future of cyberattacks directed at Sony.63 A former Sony staff member anonymously told a security reporter at the website Fusion that “thereal problem lies in the fact that there was no real investment in or real understanding of whatinformation security is.”64 Other former employees were quick to tell media sources that Sonybelieved each of its attacks to be a novel, once-in-a-lifetime breach rather than part of a bigger, moredisturbing trend in cybervandalism. Risk assessments were done regularly, in order to identifyvulnerabilities, but staff said that those reports were not always acted on.65 Sony. Drones. BlackBerry. These are just three examples of how the paradox of the presentobstructs our thinking about and planning for the future. To break through the paradox, you mustbecome chronologically ambidextrous, and be able to focus on the needs of your immediate and very-near future while simultaneously allowing yourself to think critically about a time far into the future.In order to do that, you need a futurist’s playbook. FUTURE FORECASTING IS A PROCESSOnly 1 percent of humans are truly ambidextrous, with equal amount of ease using either their left orright hand. Researchers believe that many people who think they are ambidextrous are actually leftieswho have had to adapt to a right-handed world.66 With practice, you can train yourself to use bothhands asynchronously and with fluidity. In fact, if you’re a piano player, this is a skill you’ve alreadymastered to some degree. Composers Sergei Rachmaninoff and Thelonious Monk both created musicthat defy human dexterity—and yet, with enough practice, a skilled musician can learn to play theirclassical and jazz piano pieces with technical proficiency. Forecasting the future requires a certain amount of mental ambidexterity. Just as a piano playermust control her left and right hands as she glides around the keyboard playing Monk, you need tolearn how to think in two ways at once—both monitoring what’s happening in the present and thinkingthrough how the present relates to the future. Forecasting involves a series of six steps, which I’ll getto shortly. For now, you need to know that the steps are governed by the following rules: 1. The future is not predetermined, but rather woven together by numerous threads that are themselves being woven in the present. 2. We can observe probable future threads in the present, as they are being woven. 3. We can impact our possible and probable futures in the present. To most people, time feels linear, with a beginning, a middle, and an end. However, the eventshappening during a particular time are neither predestined nor bound to follow a set path. Instead,


individual events are influenced by known and, more problematically, unknown variables.

In physics, the Heisenberg uncertainty principle states that you can never know both the exact position and the exact speed of an object—essentially, everything influences everything else.67 (For example, to know the velocity of a quark, we have to measure it, and the very act of measuring it can affect it in some way.) If we subscribe to the laws of the universe, we must agree from the outset that there is no one, predetermined future, but rather a possibility of many futures, each depending on a variety of factors.

Future forecasts are probabilistic in nature, in that we can determine the likelihood and direction of how technology will evolve. It is therefore possible to see elements of the future being woven in the present, as long as we know how to see the entire fabric at once, not just a small, finite piece of it.

Picture a millhand working in a massive factory that looks like an enclosed football field, with a series of lines hung across every yard line. On one side of the line are buckets of raw cotton, which are being fed through big rollers and slightly twisted onto a bobbin. A worker runs up and down his part of the row watching for breakdowns, snags, or jams. In another room, there are workers mounting that yarn onto an enormous frame, where threads are woven between wires on a rotating beam. Eventually, beams would be affixed to a loom, where, line by line, elaborate cloths and textiles are woven with an infinite variety of patterns.

If one of the workers in the factory looks at a one-inch-square swatch of cloth, the colors may look interesting, but he would have a difficult time seeing what those threads, together, signify. He would need a system in order to help him see both the detail of the thread and the entire loom, where patterns reveal a complete picture. In fact, the millhands could reconstruct exactly what’s there—by methodically reviewing inch by inch and recognizing patterns. Indeed, the millworkers by necessity remain narrowly focused on their present tasks, because their immediate responsibility is making sure that day-to-day benchmarks—a certain amount of yarn or cloth woven per worker, a minimum amount of time or product lost—are being met.

Even the most technically savvy among us are often unwitting millhands, as managing the operation of a modern organization has become a complicated, formidable task. We are all engaged in some form of strategic thinking and development, like creating annual budgets or three-year strategic operational plans, work that is essential in order to confront the strategic environment. But taking a step back, looking at the patterns in order to understand the cloth as it’s being woven—and intervening in order to change the course of events—is more time consuming and difficult. This is forecasting: simultaneously recognizing patterns in the present, and thinking about how those changes will impact the future. You must flip the paradigm, so that you can be actively engaged in building what happens next. Or at least so that you’re not as surprised by what others develop.
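To make the probabilistic framing above concrete, here is a minimal sketch, offered only as an illustration and not as part of the method described in this book. It assigns rough likelihoods to a handful of hypothetical threads of a single trend and sorts them into probable, plausible, and possible bands; the thread names, weights, and cutoffs are all invented for the example.

# Illustrative only: rough likelihoods for hypothetical "threads" of one trend.
# The thread names, weights, and band thresholds are invented, not the author's data.
threads = {
    "driver-assist features become standard": 0.85,
    "dedicated autonomous highway lanes":      0.55,
    "subscription pods replace car ownership": 0.30,
    "flying personal cars at scale":           0.05,
}

def band(likelihood: float) -> str:
    """Sort a thread into a coarse band; the cutoffs are arbitrary illustrations."""
    if likelihood >= 0.7:
        return "probable"
    if likelihood >= 0.3:
        return "plausible"
    return "possible"

for thread, p in sorted(threads.items(), key=lambda kv: -kv[1]):
    print(f"{band(p):9s}  {p:.2f}  {thread}")

The numbers are guesses; the point is the habit of treating the future as a spread of weighted outcomes rather than a single prediction.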
Joseph Voros, a theoretical physicist and professor at Swinburne University of Technology, offered my favorite explanation of future forecasting, calling it “an element of strategic thinking, which informs strategy-making,” enriching the “context within which strategy is developed, planned and executed.”68

The forecasting method I have developed—one, of course, influenced by other futurists but different in analysis and scope—is a six-part process that I have refined during a decade of research as part of my work at the Future Today Institute.69 The first part involves finding a trend, while the last two steps inform what action you should take. These are the instructions:


1. Find the Fringe: Cast a wide enough net to harness information from the fringe. This involves creating a map showing nodes and the relationships between them, and rounding up what you will later refer to as “the unusual suspects.”

2. Use CIPHER: Uncover hidden patterns by categorizing data from the fringe. Patterns indicate a trend, so you’ll do an exhaustive search for Contradictions, Inflections, Practices, Hacks, Extremes, and Rarities.

3. Ask the Right Questions: Determine whether a pattern really is a trend. You will be tempted to stop looking once you’ve spotted a pattern, but you will soon learn that creating counterarguments is an essential part of the forecasting process, even though most forecasters never force themselves to poke holes into every single assumption and assertion they make.

4. Calculate the ETA: Interpret the trend and ensure that the timing is right. This isn’t just about finding a typical S-curve and the point of inflection. As technology trends move along their trajectory, there are two forces in play—internal developments within tech companies, and external developments within the government, adjacent businesses, and the like—and both must be calculated.

5. Create Scenarios and Strategies: Build scenarios to create probable, plausible, and possible futures and accompanying strategies. This step requires thinking about both the timeline of a technology’s development and your emotional reactions to all of the outcomes. You’ll give each scenario a score, and based on your analysis, you will create a corresponding strategy for taking action.

6. Pressure-Test Your Action: But what if the action you choose to take on a trend is the wrong one? In this final step, you must make sure the strategy you take on a trend will deliver the desired outcome, and that requires asking difficult questions about both the present and the future.

These six steps help to identify the future of x, where you might define x as: driving, governing, banking, health care, journalism, national security, shopping, insurance, orchestras, K-12 education, law enforcement, movies, investing, or any number of other fields. That’s because technology is permanently intertwined with everything we do, and researching tech trends should be embedded into the everyday operations of a twenty-first-century organization.

CHANCE AND CHAOS

A chance event can alter the future of anything, from a baseball game to the traffic on your commute home. It can also dramatically affect a textile. Indeed, anyone who has ever spent time knitting will tell you that one small deviation can completely transform the outcome of a scarf. Knitting creates tiny “v’s,” which interlock and build upon each other in rows. One dropped v won’t be immediately noticed, until a long tear appears. Additional v’s will embed themselves, causing the scarf to change shape. Things get even more complicated with multiple thread colors. By sheer chance—perhaps a distraction, or a miscount, or even an intentional omission in order to experiment—the future of the scarf is forever altered by just one little v. We have a general sense of what outcome is likely—some kind of fabric that hopefully can be used as a scarf—but for even the most seasoned knitters there is some probability that deviations will result in a final product that may not match the initial idea.

Forecasting the future is subject to chance and chaos. Every action can cause effects across entire complex systems.
The emergence of one new technology may raise the probability of any number of occurrences, because it might change our economic circumstances, social dynamics, financial opportunities, political access, or any number of other factors. Environmentalist John Muir once


explained this phenomenon: “When we try to pick out anything by itself, we find it is hitched toeverything else in the universe.”70 In our modern age, technology is inextricably and especiallywoven into the fabric of our organizations, our societies, and our everyday lives. Sometimes, what may seem like a bunch of wayward or random v’s is actually part of a largerpattern that’s evolving. Chaos theory tells us that any complex system is dynamic, and that a multitudeof results are possible. Therefore, rather than attempting to predict a singular outcome, we insteadproject a set of possible, probable, and preferred scenarios, using trends as anchors. History informs us that scientists in Scotland successfully cloned a sheep named Dolly, born in1997, and that the scientific community cried foul only after the news had been published.71 In theend, their objections didn’t stop the researchers from continuing their work. What history doesn’t tellus is what might have happened if public outcry over their research had led to the UK Parliamentenacting emergency legislation, arresting the scientists and forever banning embryonic cloning. Itdoesn’t tell us what might have happened if, rather than responding with anger, the scientificcommunity had instead immediately started in on a secondary round of research to clone specifictissues, like Dolly’s right lung. It doesn’t tell us what might have happened if Dolly had only lived amonth. Or what might have happened if a massive earthquake, decimating the west coast of Scotland,had occurred the day that announcement was made. Some may argue that given the rate and expansive scope of technological innovation and ourcultural and political response to it, it is impossible to forecast the future. “How can I, or anyoneelse, possibly anticipate the future, given how quickly everything seems to be changing?” you mightwonder. This is why forecasting the future requires thinking in contradictory ways. We must accept that thefuture is not predetermined—that we can both know what’s past the horizon and intervene to shape it—while simultaneously acknowledging that any number of variables, at any time, can influence anoutcome. We must solve the paradox of the present by practicing ambidextrous thinking. Using theinstructions, which are governed by the three rules, we can focus on finding interconnectedrelationships between one or more technologies and thinking systemically, rather than becomingfixated on a single, promising new gadget or app. Seeing the future is possible, even though the rate of technological advancement has begun tooutpace the speed at which people are accustomed to working and making decisions. Forecastingwhat’s ahead is a matter of recognizing emerging trends and then taking the right action at theappropriate time. Look no further than Nintendo, IBM, Diebold, Wells Fargo, and 3M. Thesecompanies are more than one hundred years old. More than once, emerging technologies and fickleconsumer behavior have threatened to destroy their businesses, and yet they all continue to thrivetoday. For example, IBM was founded in 1911 as the Computing-Tabulating-Recording Company, andit manufactured time-keeping systems, scales, and punched-card machines. In 1924, the companyadopted a new name—International Business Machines, or IBM—and reinvented itself as a servicethat could keep track of vital statistics and, in later decades, other data, such as Social Securitynumbers. 
By the 1960s, IBM was making computers for big government agencies and corporations. Two decades later, it partnered with a new software upstart called Microsoft and manufactured personal computers. IBM clones permeated the market, so it pivoted to becoming a services company, investing in advanced software. In 1997, IBM’s Deep Blue beat world chess champion Garry Kasparov, who resigned after just nineteen moves. In 2015, IBM’s artificially intelligent computing


platform Watson was assisting doctors at the Mayo Clinic and at the Memorial Sloan Kettering Cancer Center with complex diagnoses.72

Everything new now seems novel, because the changes heading our way will seem too extraordinary to become part of our daily lives. And yet, I have a drone in my living room. Landscrapers, hackers taking down one of the world’s largest companies, embryonic cloning, artificially intelligent computers assisting doctors—these technological events will not only become commonplace, they will provide the essential basis for our human-machine evolution.

The question we are going to explore throughout this book is this: How do we make the soon-to-be-normal feel less novel? The instructions will help us find the answer. But first, we ought to distinguish between what is a real trend—and what’s merely a shiny object.


CHAPTER TWO When Cars Fly Understanding the Difference Between Trend and TrendyWHEN YOU WERE a kid, you probably imagined yourself not just riding in the family car of the future, but flying in, say, the Jetsons’ hovercar or Luke Skywalker’s Landspeeder, or aSpinner soaring in congested lanes above the dark streets in Blade Runner. For me, it was the DMC-12 DeLorean with its dashboard computer, flux capacitor, and gull-wing doors. I daydreamed aboutthat car all the time. Its parts were old and rusty, but it flew through the space-time continuum, not justthrough the air.1 Dreams like these are part of a centuries-old quest for autonomous transportation, and flying carshave been a persistent, trendy theme within our popular culture on and off for more than a hundredyears. Since John Emory Harriman filed the first patent for an aerocar in 1910, we’ve beenalternatively excited and disappointed by a steady stream of prototypes and promises. WaldoWaterman’s Arrowbile was the first to leave the street for the sky in 1937. Three years later, HenryFord remarked confidently, “Mark my word: a combination airplane and motorcar is coming.”Aviation publicist Harry Bruno clarified, saying that cars of the future would look like tiny “copters”;when school let out, they would “fill the sky as the bicycles of our youth filled the prewar roads.” In1949 Life magazine featured the Airphibian, an aerocar that could fly from a backyard airstrip toLaGuardia Airport and then transform into a convertible-like vehicle capable of driving to TimesSquare.2 The dream of flying cars continued into the twenty-first century and up to the present day aspeople built new prototypes with vertical take-off and landing capabilities, super-strong carbon fiberbodies, ducted fan propulsion, and cheaper flight-stabilizing computer systems. Although thesefuturistic aerocars look completely different from their twentieth-century prototypes, there hasn’t,materials aside, been any innovation in design since Harriman filed that first patent. Flying cars are now a synonym for failure: a perceived lack of innovation or an inability toaccurately forecast the future. No one has been more vocal about this than venture capitalist PeterThiel, who cofounded PayPal with Elon Musk and was Facebook’s first outside investor. Thiel has


stood before countless audiences arguing that we no longer live in a technologically acceleratingworld, and that there haven’t been any true, futuristic innovations since the 1960s. In fact, his firm’sslogan, “We wanted flying cars; instead we got 140 characters,”3 is a slight to both innovators and toTwitter. But rather than a poster child for technological failure, flying cars—or the lack thereof—illustratewhy spotting trends is so difficult. Flying cars are trendy, shiny objects; the concept emerges every tenyears with regularity. Henry Ford’s innovations in car manufacturing were followed by Buick’sAutoplane and the Bryan Autoplane concepts in the 1950s,4 the Wagner Aerocar in the 1960s,5 theAVE Mizar (which combined a Ford Pinto with the rear end of a Cessna plane) in the 1970s,6Boeing’s Sky Commuter prototype in the 1980s,7 and so on. Meanwhile, there is a real trend worthfollowing, but it wasn’t cars that can fly. Rather, each decade brought significant advances inautomobile technology that resulted in humans having to devote less and less of our attention tophysical driving tasks. As those advances took place, we failed to realize how important they wouldeventually become for bringing about a whole different paradigm shift. Compounding the problemwere the paradox of the present—the bias of paying most attention to the last few signals you’ve seen,read, or heard—and the difficulty the human brain has in describing something new using terms andideas we don’t yet understand. Rather than tracking the trend in autonomous transportation, which hasto do with infrastructure, artificial intelligence, and a lot of computer systems you can’t really see, wenaturally looked for what was novel within a familiar frame of reference. And so we established cars—but with wings! up in the air!—as the signal to follow. Distracted by what is temporarily trendy, wefind ourselves continually disappointed. Though they’re difficult to identify correctly, trends are vitally important, because they are thenecessary signposts that must be recognized early for you to be a participant in shaping the future.And, flying cars aside, there are game-changing innovations in some stage of development takingplace right now across many different fields that will transform not just travel but human longevity,communications, education, governance, and more. In fact, flying cars don’t represent failure; theyillustrate how the promise of exciting new technologies sometimes obscure real change that’s actuallyunderfoot.Flying cars are not the only example of how we misread the future. Another, also concerning the trendof autonomous transportation, comes from the late 1800s. US cities were struggling with the pressuresof America’s fast-growing population, much of it due to a heavy wave of new immigration. Chicago,for example, had great architecture and music, but it also had serious drainage problems, largely dueto these population pressures. Too many people and animals crowded its unpaved streets. As thepopulation increased, a larger number of horses were needed for transportation, and people had toslog through horse manure and squalor to get around by foot. Walking was not that much faster than theslow-moving public transit system, which was powered by horse-drawn “omnibuses” that seated onlytwenty people at a time.8 The very first gasoline-powered cars were by then being given their first tests in Springfield,Massachusetts, and Germany, but they were experiments out on the fringe. 
The common frame of reference for people thinking about transportation was that image of people on foot, trying desperately


to walk through those crowded streets. They, too, were hampered by the paradox of the present. Alfred Speer, a New Jersey inventor and wine merchant, wanted to solve the hassle of gettingaround. His idea was drawn from the immediate vantage point outside his shop: there was anomnipresent mob of people moving every which way around the sidewalks and streets. What if,instead of walking, they were organized into a neat file as they commuted around the city? Speer’sidea was an invention to automatically move people around, without actually requiring them to do anywalking themselves: a sidewalk that moved.9 Moving sidewalks promised to organize Chicago’sthrongs of people as they commuted around the city. It would mean clearer roadway access for horse-drawn carriages. Streets would become cleaner by default, reducing the strain on merchants, whowere charged with maintaining their little plots of land. Moving sidewalks would lessen the burdenon all Chicagoans, automating a difficult part of daily life. Speer received a patent for his invention in 1871, and it was built for the 1893 World’s Fair inChicago. It spanned the length of a 3,500-foot pier, was made out of wooden planks, and movedpeople at the speed of two miles per hour. There was a secondary platform—an express lane withbenches—that was twice as fast. At five cents per ride, Speer’s wooden sidewalk could transport31,680 people an hour.10 Speer’s invention caught on, because it solved a fundamental human need—to move around a citysafely and quickly—and because it made life easier for everyday people. Enthusiastic engineers andcity planners started building on Speer’s original invention and improving it. The trottoir roulant,launched in Paris at the Exposition Universelle of 1900, had three tracks and moved slightly fasterthan Speer’s sidewalk.11 All along its two-mile track, the trottoir amazed passengers and irritatednearby merchants, since the clanging of its machinery was so noisy.12 There were soon plans to buildan elevated moving sidewalk in Manhattan that could travel nineteen miles per hour in order torelieve foot-traffic congestion down below. There was yet another plan to create a moving walkwayin a loop system over the Brooklyn Bridge. Entrepreneurs proposed similar moving sidewalks inBoston, Detroit, Los Angeles, Atlanta, and Washington, DC.13 By 1905, everyone thought that thefuture of transportation was a sidewalk in perpetual motion. Of course, none of those later projects came into being. For one thing, Chicago winters can beharsh—and there were no contingency plans for how to operate (much less ride) a moving sidewalkduring high winds and lake-effect snow. The original wooden models were extremely loud andrickety, and they suffered from ongoing mechanical problems. Rather than rejoicing, merchantscomplained that the sidewalks were a disruption. The moving sidewalk was an exciting new technology, as was the newly invented “safetybicycle,” which allowed for better steering and greater speed and was deemed suitable for a womanto ride.14 And yet out on the fringe, a handful of engineers had started to experiment with gasoline andinternal combustion engines. In this world, which, remember, was entering the golden age of bicycles,most people couldn’t conceive of riding in a 1,200-pound metal box that could whiz down the streetat forty miles per hour. It is hard for us today to appreciate the excitement that moving sidewalks generated in the latenineteenth and early twentieth centuries. 
We effortlessly wheel our carry-on luggage from the ticket counter at the airport to the gate without even thinking about the fact that our “moving sidewalks” aren’t actually sidewalks at all. It’s invisible infrastructure to us now.

You may have assumed that when Chicago abandoned its moving sidewalks and when the


Airphibian failed to go commercial, those trends were dead. In fact, moving sidewalks and flyingcars were never trends in and of themselves. Rather, they were manifestations of something different:a trend in autonomous travel. We have been trying to automate transportation for the past one hundredyears, inventing everything from streetcars pulled by horses to advanced machines that essentiallymove without much of our direct input or supervision at all. WHAT IS A TREND, EXACTLY?Technology has made it difficult to distinguish between a trend and something that is simply trendy,and that is because much of it is complicated and confusing. Or it is invisible. It’s easy to fixate onwhat’s trendy—the latest app, the newest gadget, the hottest social network—but more difficult totrack how technology is shaping our organizations, government, education, economy, and culture. Atany moment, there are hundreds of small shifts taking place in technology—developments on thefringe—that will impact our lives in the future. A trend, therefore, is a new manifestation of sustainedchange within an industry, the public sector, or society, or in the way that we behave toward oneanother. A trend is a starting point that helps us to simultaneously meet the demands of the presentwhile planning for the future. In a sense, trends are the analogy our minds need to help us think aboutand understand change. All trends intersect with other aspects of daily life, and they share a set of conspicuous, universalfeatures. Here is how those features—present in all trends—relate to the trend of autonomoustransportation: A trend is driven by a basic human need, one that is catalyzed by new technology. As societyevolves, we require progressively advanced technologies to serve our busy lifestyles. We need tospend less time traveling from place to place. With increasingly busier schedules, we are travelingmore, which makes our roads, trains, and airports more and more crowded. Better transportation andtechnology results in more meetings, events, and opportunities, which cycles back to a need foradditional travel. We experience “road rage” and a lack of freedom to use our time as we choose. Orwe try to multitask as we drive, answering email messages, participating in teleconferences, orclicking through mobile apps.A trend is timely, but it persists. Transportation has always been about fulfilling our need forefficiency, speed, and automation. Could the army of King Ennatumh, one of the ancient Sumerianrulers, have defeated his enemies and conquered the city of Umma without having invented thechariot? Possibly, but having access to a wheeled cart meant faster attacks on the battlefield, and italso spared their soldiers and horses from the exhaustion caused by carrying heavy loads. Forthousands of years, we have been trying to lighten our loads and move around more quickly.A trend evolves as it emerges. We didn’t progress in a straight line from wheeled carts to self-drivingGoogle cars. Technological innovations lead to new ways of thinking, and, as a result, new kinds ofvehicles. Each iteration includes learning from past successes and failures. In the 1950s, General



Motors and RCA developed an automated highway prototype using radio control for speed and steering.15 A steel cable was paved into the asphalt to keep self-driving cars on the road. The autonomous-vehicle trend evolved over the next six decades, and Google eventually put a fleet of self-driving cars onto the roads of Mountain View, California, and Austin, Texas. Google uses an algorithm-powered driver’s education course: the cars “learn” to sense and avoid things, like a teenager riding on a skateboard, that they haven’t been explicitly taught to recognize. A decade from now—in the late 2020s—will we all own and drive Google cars that are just like the ones being tested now? Probably not. But we will no doubt be driving cars that require significantly less of our attention and direct supervision behind the wheel as a result of Google’s research.

A trend can materialize as a series of unconnectable dots that begin out on the fringe and move to the mainstream. With the benefit of hindsight, here are several dots from the year 2004 that, at first glance, don’t seem to connect:

• The Defense Advanced Research Projects Agency (DARPA), the arm of the US Department of Defense that’s responsible for creating the future of military technology, launched a Grand Challenge for fifteen self-driving military vehicles, which had to navigate 142 miles of desert road between California and Nevada.16

• That same year, the R&D department at DaimlerChrysler was studying the future of telematics—that is, the system of sending, receiving, and storing information related to vehicles and telecommunications devices (like a GPS)—and intelligent transportation systems.17

• Motor Trend’s Car of the Year was the brand-new Toyota Prius, praised by reviewers for its newfangled computerized dashboard. The magazine said that “the cockpit may look like it came from a NASA clean room, but the Prius is as easy to use as a TV. Press ‘Power’ to bring the vehicle to life, select ‘D’ with the joystick, press on the electronic ‘drive-by-wire’ throttle pedal, and you’re off.”18

• Google acquired the digital mapping company Keyhole.19

• Developers at Google were working on an early version of an advanced mobile operating system called Android that would be aware of the owner’s location.20

• Another operating system, this one for cars, was built by QNX and soon acquired by Harman International Industries, which wanted to expand the technology for use in infotainment and navigation units.21

How do these dots connect? Technology begets technology. Just as the trottoir roulant built on Speer’s invention, DARPA learned from the original experiment by General Motors and RCA in autonomous cars, adding telemetry and computerized controls. Separately, car manufacturers like Toyota were building electronic and “drive by wire” technologies to replace mechanical linkages. Eventually, Google could incorporate all of this work, combine advanced navigation into a new kind of operating system, and develop and test its first fleet of self-driving cars. It’s easy to connect the dots in hindsight, but with the right foresight you can identify simultaneous developments on the fringe and recognize patterns as they materialize into trends.
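One way to picture this connect-the-dots exercise is as a small grouping problem. The sketch below is purely illustrative (it is not the author’s tooling or taxonomy): it tags each 2004 signal with a few assumed themes, then surfaces the themes that several signals share as candidate trends.

from collections import defaultdict

# Fringe signals from 2004 (paraphrased from the text), tagged with themes.
# The theme labels are illustrative assumptions, not the author's categories.
signals = [
    ("DARPA Grand Challenge for self-driving vehicles", {"autonomy", "navigation"}),
    ("DaimlerChrysler R&D on telematics",               {"navigation", "connectivity"}),
    ("Toyota Prius drive-by-wire cockpit",              {"autonomy", "electronics"}),
    ("Google acquires Keyhole mapping",                 {"navigation", "mapping"}),
    ("Early Android with location awareness",           {"connectivity", "mapping"}),
    ("QNX operating system for cars",                   {"electronics", "connectivity"}),
]

# Group signals by theme; a theme shared by several signals hints at a pattern.
by_theme = defaultdict(list)
for name, themes in signals:
    for theme in themes:
        by_theme[theme].append(name)

for theme, members in sorted(by_theme.items(), key=lambda kv: -len(kv[1])):
    if len(members) >= 3:  # arbitrary cutoff: enough overlap to look like a trend
        print(f"candidate trend around '{theme}':")
        for m in members:
            print(f"  - {m}")

With these invented tags, navigation and connectivity each recur across three signals, which is roughly the convergence the paragraph above reconstructs in hindsight.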


SORTING OUT REAL TRENDS FROM RED HERRINGSUsing autonomous transportation as the trend, how could we build alternate narratives—the probable,the plausible, and the possible—for the future? Scenario #1: Probable. A hybrid system of semi- and fully autonomous transport utilizing a gridsystem covering the 3.9 million miles of public roads in the United States.22 You might drive your carfrom your house to a connected highway and surrender control as sensors in your car link to othercars in the moving network. From there, your car would drive itself, monitoring and maintaining safedistances. Three minutes before your exit, the navigation system would ask you to prepare to driveagain, and you would take over the controls once you’ve turned off the highway. These mostly autonomous cars won’t achieve vertical lift. That’s because the elements that makecars safe to drive either don’t matter in the air or become a hindrance. For example, a three-ton carwould require 3,000 pounds of thrust from the engine just to get off the road, and the majority of ourpaved roads would buckle under the pressure. Now, what if we add additional weight? Today, whenwe make a big trip to the grocery store or pack a lot of suitcases for vacation, a car easily handleswhatever we put into it. That would not be the case with a flying version—before takeoff, either thedriver-pilot or a highly advanced computer system would need to carefully calculate and balance theweight load. You could argue that one hundred years from now, we will have innovated a workaroundfor this problem. But does a flying car solve an inherent human need? Though that still begs the question: Why bother with a flying car at all? There have beenimportant changes to transportation since the Sumerians first built horse-drawn carriages andLeonardo da Vinci borrowed from a bird’s anatomy to sketch his flying machine, but more often thannot they have been incremental.Scenario #2: Plausible. The airspace overhead, as noted earlier, will eventually be regulated toallow for recreational and commercial drone flight. In the next one hundred years, we could have alayer on top of those zones, in the 2,000- to 3,500-foot range, designated for semi-autonomous humantransport. I’ll call that lane a skyway, and the vehicle a disc. Just like autonomous drones, which areprogrammed using GPS coordinates and powered by a collision avoidance system, three sizes ofdiscs—single occupant, up to four occupants, and up to eight occupants—could transport us frompoints A to B using the shortest, safest route available. As with our cars in the future, we mightsubscribe to a skyway service rather than owning a disc of our own. Skyway access points wouldprobably require a short ride up 140 stories to a disc pad, which would occupy the roofs of existingbuildings. An artificially intelligent operating system would load our tastes and preferences,automatically adjusting the climate control, seats, ambient lighting, and music to our liking. We coulduse skyways for longer-distance travel, and highways for shorter distances within or between cities. But still, what does the flying disc problem really solve? For those who currently commute, if asensor network eradicated traffic so that they could reclaim that time for another purpose, would aflying disc really matter? Would skyways of the future prove safer than the highways we alreadyhave? 
For those who use public transit, would a disc traveling as the crow flies justify spending a lot of money on a new kind of vehicle and the infrastructure to operate it? Would the average person come to rely on it as a necessity?


Scenario #3: Possible. What if we could move around without any vehicles at all? A few years ago,Dutch physicists at Delft University of Technology’s Kavli Institute of Nanoscience successfullycompleted experiments in something called “quantum entanglement.”23 Einstein had once dismissedthe concept because, as he put it, “physics should represent a reality in time and space, free fromspooky actions at a distance.”24 Up until recently, quantum entanglement was only a strange theory inquantum mechanics. But a physicist at IBM, during an annual meeting of the American PhysicalSociety in March 1993, confirmed that it was possible.25 Electrons orbit an atom much like the Earthorbits the sun. Just as the Earth spins on an axis, so do electrons. When they become entangled—whenthey’re made to smash up against each other—and then separated again, the spin reverses. In essence,the two electrons are converted into mirror images. What the Kavli researchers discovered was away to change the spin of two particles separated at a distance. The results in their study showed areplication rate of 100 percent.26 You’ve probably already heard about quantum entanglement by its sci-fi name: teleportation (as in“Beam me up, Scotty”27). Just as it would have been inconceivable for those early moving-sidewalkengineers to imagine a future that included space travel, in our present-day context we can’t fathomactually beaming ourselves up. For one thing, we would need to build a machine capable of sendingthe data for all 7 billion billion billion atoms that make up the average human body (that’s 7 followedby 27 zeros) the same way the Kavli scientists have for single particles. Naturally, our minds wanderto bugs, hacks, and firmware updates, each of which could throw an error into your system, such assmashed cells that would render you a blob of goo on the other end of your journey. You would haveto get over any squeamishness about whether the replicated person on the other end would even reallybe you. Technically, the person who comes out on the other side would be a replica—a copy,essentially—of what you were when you went into the teleporting machine. On the destination sidewould be your atoms, rearranged in exactly (hopefully exactly) the same way they were at thebeginning of your teleportation. But again, that’s using the frame of reference we know without digging deeper or factoring in thenotion that technology itself causes the acceleration of technology in weird and wonderful new ways.Perhaps two hundred years from now, we won’t physically travel at all. Instead, we’ll leave ourphysical bodies behind and instead teleport our minds into human blanks elsewhere. Just as we rentand share cars to get around today, we’ll pick up a body near our destination. Visual presets willoverlay our physical characteristics on the blank so that others recognize us.These are only three scenarios for the future of transportation, and none include a car with wings.This gets to the heart of why, when thinking about the future, it’s important to see around the cornersof established thought, and why we cannot dismiss what appear to be only incremental changes.While there are a few people working on flying cars, many more researchers on the fringe arebuilding on earlier innovations as they create alternative hypotheses, models, and plans. Autonomous transport—not flying cars or moving sidewalks—as we’ve seen, is the trend tofollow. But what does that mean? 
Especially given that we’ve been thinking about moving sidewalks and flying cars for a century or more?


A trend doesn’t materialize overnight, and it shouldn’t be conflated with something that’s new and “trendy,” as it’s more than the latest shiny object. A freshly anointed unicorn like Uber valued at $1 billion or more may be part of the trend, but on its own, it’s just one data point to consider. In the years to come, you will no doubt hear about several flying-car startups readying vehicles for market. They’re exciting, but they’re a red herring. We’ve never needed a flying car. We’ve just needed a system of transportation that corresponds with the needs of our current lifestyles. In the United States, that will mean getting around while devoting less of our attention to driving, so we can dedicate ourselves to other tasks.

Fundamentally, a trend leverages our basic human needs and desires in a meaningful way and aligns human nature with emerging technologies and breakthrough inventions. We need to think about how a trend might develop into the farther future: Will scenario #2, flying discs, complement a busier and more tech-infused lifestyle significantly better than scenario #1, automated cars and highways? Probably not.

If we’re intent on forecasting the future of transportation, and we know that autonomy is a trend, then we need to abandon the comfortable analogy with which we’re already familiar. In other words, why are we still trying to build cars that fly—an idea that sounded exciting in an era when airline travel wasn’t yet available to everyone, when cars were still too expensive for the average household, and when meetings only happened face-to-face?

We will someday have the freedom to move around, unencumbered by traffic or other people, in an automatically moving vehicle. Our transportation of the future will arrive; it just won’t look anything like what we saw on TV and in the movies.

HOW SOON IS THE FUTURE?

“The future” is a meaningless distinction, especially when it’s used to make projections or decisions. The future is simultaneously three hundred years, one decade, twelve months, two days, or forty-seven seconds from this very moment. As soon as you start talking about the future of something, everyone will undoubtedly want to know how soon the future will get here.

It is helpful to organize the evolution of trends along six general time zones, which I will define in just a moment. They are not arbitrary; they follow the pattern of acceleration across various sectors of an ecosystem, which includes academic, scientific, and DIY researchers; tinkerers and hackers building and breaking systems; grant makers and investors; government regulators; equipment manufacturers; supply chain managers; marketers and advertisers; early adopters; and everyday consumers.

Progress in science and technology accelerates in part because of the technology itself. Moore’s Law is the best-known expression of this phenomenon. Gordon Moore, the cofounder of Intel, wrote in 1965 that the number of components on integrated circuits would double every two years as the size of transistors shrank.28 This simple, elegant projection became the golden rule for the electronics industry, enabling Intel to make faster, smaller, and cheaper transistors and tech innovators to plan ahead for computer processing power that would double in capacity every year.

As a trend moves through time zones, progress increases exponentially, in part because of Moore’s Law, and in part because of an even more broadly applicable law: the very acceleration of


change causes the rate of technological change to accelerate. For example, we can organize our thinking about the autonomous travel trend along the following six time zones:

Now: within the next twelve months. Within a year of the date this book is released (that is, roughly by the end of 2017), cars will be equipped with software updates and new sensors that perform more functions for the driver, such as parking and adaptive cruise control.

Near-term: one to five years. By 2022, most cars will be equipped with cross-path cameras to sense nearby objects, and they will have adaptive cruise control for driving in stop-and-go traffic.

Mid-range: five to ten years. By 2027, advanced GPS and LiDAR (light detection and ranging) technology will transmit your vehicle’s location and recognize other vehicles sharing the road. This technology will begin to enable cars to drive themselves.

Long-range: ten to twenty years. By 2037, the highway system will have been upgraded to work in symbiosis with semi-autonomous vehicles. Human drivers will take over on smaller streets. On mandated autonomous highway lanes, people will be free to read, watch videos, or conduct work.

Far-range: twenty to thirty years. By 2047, we will no longer own cars and cities will no longer operate buses. Instead, automated vehicles will be subsidized through taxes and offered at no cost. Those with the means to do so will subscribe to a transportation service, which operates pods that are fully automated. Pods will transport us to destinations as required.

Distant: more than thirty years. By 2057, pods will be connected to a series of high-speed maglev trains, which have started to supplant commercial air routes on the east and west coasts of the United States.

These time zones are deliberate, and my assignment of them to the autonomous-travel trend isn’t a stab in the dark. Rather, they are based on what I currently know about how technology is advancing.

It is important to keep time zones in mind when thinking about trends. When envisioning the future, we can’t draw a straight line starting from today to some distant time. Instead, we must account for the fact that the rate of progress is influenced by what’s happening right now as well as what will likely occur in the future. If we consider the trajectory of autonomous transport, we can see that there wasn’t a lot of progress between 1950 and 1980. Between 1980 and 2005, cars were influenced by the adjacent industry of computing, so we saw subtle—but important—shifts away from manually operated to automatic vehicles. There has been tremendous innovation between 2006 and today, from touchscreen dashboard systems to fully electric vehicles to prototypes of driverless cars. That’s because trends are influenced by the compounding acceleration of change in technology.

We are continually interrelating the past, present, and future time zones within the context of our personal experiences, the groups we belong to, and the projects we work on. If you aren’t thinking in terms of time zones, you cannot effectively plan for the future.

TREND INFLUENCERS: THE TEN SOURCES OF CHANGE

Because trends and time zones are a different way of seeing and interpreting our current reality, they provide a useful framework for organizing our thinking, especially when we’re hunting for the unknown and trying to find answers to questions we do not yet even know how to ask. The task of following shifts in technology should not be under the purview of a company’s R&D team alone; nor


should it only excite enthusiastic, tech-savvy Millennials. Changes in technology affect all of us,regardless of our personal or professional calling. Everyone within an organization should be awareof trends. Technology trends not only impact emerging platforms, code, devices, and digital workflows, butalso influence societal change. We can see how seemingly small tech trends and forces in othersectors can result in great consequences by revisiting the Sony hacking scandal. Earlier, I explained how Sony suffered its first major breach in 2005. In the decade leading up tothe 2014 attack involving Sony Pictures, there were a number of adjacent trends and social changes tonote: • Internet users were starting to gather in unusual places online. A relatively new imageboard website called 4chan, which was essentially a message board where photos could be posted for discussion, had launched recently, and its audience was growing. Another message board, called Reddit, launched as a sort of marketplace for ideas. Its design wasn’t intuitive or immediately easy to use, so only those fluent with online communities—a lot of them were hackers—signed on to talk and share links. It was all anonymous, and a significant portion of the posts had to do with gaming. • Hackers, typically sole actors, were starting to collaborate on bigger projects. A distributed group of activist hackers (or “hacktivists”) calling themselves “Anonymous” were starting to organize on 4chan. • US citizens had grown distrustful of the government, questioning the White House’s official position about weapons of mass destruction in Iraq and giving then president George W. Bush a 60 percent disapproval rating.29 Hurricanes Katrina and Rita devastated communities around the Gulf Coast, eliminating hundreds of thousands of jobs.30 There was widespread, harsh criticism of the government’s response, which was riddled with mismanagement of resources, politicking, and miscommunication. Bush’s final approval rating by the time he left office would fall to just 22 percent, the lowest rating for a president in seventy years.31 • The US subprime mortgage crisis had caused a nationwide banking emergency; millions of people had lost their savings, their jobs, and their homes. • Occupy Wall Street, supported in part by Anonymous, gathered in Zuccotti Park and began the Occupy Wall Street movement.32 • Everyday people were angry and had started directing their ire at the government, large banks, and corporations. Hackers were angry as well, and they now had a coalition of supporters behind them. Sony could have seen trouble brewing, if not from internal IT audits, then from a growing shift inhow the heaviest users of its gaming consoles were behaving online and their mushrooming angertoward the government and corporations. To wit: there have been dedicated, anti-Sony subreddits (orchannels) on Reddit with posts dating back to 2009.33 Technology does not evolve on its own, in a vacuum. Even the most forward-thinking innovatorsare still grounded in reality, tethered to other areas of society. Trends are subjected to and shaped byexternal forces. Just as it’s useful to organize our thinking along a chronological path through timezones, it’s important to categorize the various dimensions of our everyday life, with technology as theprimary interconnector: 1. Wealth distribution: Even as technology becomes more accessible, a ubiquitous, persistent


digital divide will impact our future wealth distribution. The highest-paying jobs in America are theones for engineers, computer and information systems managers, and doctors and surgeons, and all ofthose occupations rely on technology.34 In fact, technology is even disrupting the livelihoods withinthose groups. A provision within the Affordable Care Act, which was signed into law by PresidentObama in March 2010,35 requires that all health-care providers use electronic medical records(EMRs).36 That may sound simple enough, but consider the following: In order to create an EMR, bylaw a doctor must attach a special code to the diagnoses she makes from the most recent version ofthe World Health Organization’s International Statistical Classification of Diseases and RelatedHealth Problems (otherwise known as the ICD-10). There are currently 68,000 possible codes.37Because the system is so complicated, doctors must use an approved EMR management system inorder to meet the standards of the law. This requires computers, installation, and a significant layer ofencryption. EMR management system interfaces—the screen that the doctor uses when she sees herpatients—aren’t standardized, and they are wildly complicated. In order to handle all those codes, theinterface tends to be a very long decision tree with hundreds of radial buttons. The companies thatmake these EMR management systems have to keep their own systems updated, so they are constantlytinkering with where all the buttons and windows are. This means that doctors, who went to medicalschool to treat people—not for tech support—must continually subject themselves to a very high levelof computer training to do something that for centuries required only a notebook and a pen. You mightargue that doctors, who are at the upper echelons of the pay scale, don’t deserve our pity—but thepoint I’m making is that technology impacts wages and wealth across the entire spectrum. 2. Education: After decades of falling behind other nations on our science and math scores,American schools have redoubled their STEM (Science, Technology, Engineering, and Math)curricula. The US Department of Education cited “an inadequate pipeline of teachers skilled in thosesubjects,” and in 2011 a federal program was created to address the gap.38 Technology doesn’t justaffect students—teachers and administrators must now use electronic recordkeeping systems for testsand grading. 3. Government: There are no fewer than forty-five different US government organizations andagencies managing dozens of cybersecurity initiatives right now. Those include the NationalReconnaissance Office, the Department of Homeland Security, the White House, and Congress, amongothers. Fifteen executive departments—from the Department of Health and Human Services to theDepartment of Transportation—lead policy creation and manage programs that affect daily life inAmerica.39 Locally, state and municipal governments rely on technology to deliver services and togovern. There isn’t a facet of our modern-day government that does not intersect with science andtechnology trends. 4. Politics: Our elections are now powered by coders and engineers, who are using data,predictive analytics, algorithms, and protocols to get out the vote. They build micro-targeting modelsin order to appeal to each constituent individually, and they mine our data using special software tofind potential supporters. 
Those who want to sway political opinions use the same tools and techniques no matter whom they are targeting, whether it is voters or lawmakers, and they include


lobbying groups, trade associations, and the like—even other governments. Citizen activists arebanding together and using online petitions and social media to make sure their voices are heard. 5. Public health: Cognitive computing platforms are helping public health researchers predictand map the health of our neighborhoods and communities. Predictive models are able to track thepotential spread of new biological threats, while emerging science technologies will soon enable usto eradicate the spread of certain diseases, such as malaria. 6. Demography: Understanding how our population is shifting—our birth and death rates, income,population density, migration, incidence of disease, and other dynamics—is core to managingeverything from big business to city governments. Many countries, including Japan, Italy, andGermany, will soon face rapid demographic shifts. In Japan, one in four people are now age sixty-five or older—there aren’t enough people working to support both retirees and children.40 Scienceand technology will eventually stand in for the lack of people: robots will assist with elder care,transportation and other services will become more automated, and tracking systems will help toofew health-care providers keep tabs on their growing list of patients. 7. Economy: Science and technology trends intersect with all of the major economic indicators,whether that’s durable goods, retail trade, or residential construction. Automated systems will beginto disrupt the workforce—yes, robots will take some of our jobs. But new jobs will also be createdin the process. High-frequency trading firms have changed how Wall Street operates, and that has hadan impact on our markets. 8. Environment: Technology is both harming and improving our planet. Our techno-trash and e-waste are piling up both in landfills and across our oceans. Our cars may be increasingly eco-friendly, but the factories where they are manufactured still pollute our atmosphere. And yet there areinnovative new technologies to help solve these problems, such as bioluminescent trees, which couldsomeday replace our streetlamps at night. Researchers are already thinking about terraforming Marswith 3D printed microbes from Earth. There is an entire field of synthetic biology that aims to makelife programmable: in 2010, biotechnologist Craig Venter created Synthia, a synthetic cell—essentially, the world’s first living organism to have a computer as its mother. Venter believes theinnovation will eventually help to transform our environmental waste into clean fuel and allow us tomake vaccines more easily, among other things.41 9. Journalism: How do we learn about the world around us? Newsgathering, publishing, andbroadcasting are inextricably tied to the internet, our computers, our mobile devices, algorithms, andthe cloud. Emerging technology platforms control not only what news you see, but when and why yousee it. Emerging platforms and tools have enabled journalists to be better watchdogs and to effectgreater change. 10. Media: Our individual and collective use of social networks, chat services, digital videochannels, photo-sharing services, and so on have forever changed how we interact with each other.


There are boundless opportunities to produce our own content, to band together with like-minded people, and to share our thoughts and ideas in real time. We influence each other on Snapchat and Instagram, but we also have the power to change national conversations outside the traditional news channels. Look no further than the Center for Medical Progress, an antiabortion group, and its widely publicized undercover videos. Activists recorded Planned Parenthood officials, edited their statements, and then crafted a series of wildly inaccurate stories about the fetal tissue trade. The videos went viral across YouTube, Facebook, and Twitter, eventually prompting lawmakers to propose that Planned Parenthood lose its federal funding.42

If we want to forecast the future of anything, we need to plot out these intersecting vectors of change—their direction and magnitude—as they relate to new developments in emerging technology. The first Sony hack may have been the result of a compromised codebase and an enterprising hacker. But people play Sony's games, buy its hardware, and watch its movies. Therefore, anyone seriously concerned about cybersecurity at Sony should have also considered how developments in media, the economy, government, politics, and wealth distribution would play a future role in people's attitudes, behaviors, and actions toward the company.

Like Sony, you might be tempted to decouple trends in technology from the ten modern sources of change listed above, but in our information age they are very much intertwined. Consider how these seemingly disparate developments in Japanese demographics have become the catalyst for breathtaking advances in robotics:

Modern Sources of Change: Demography, Public Health, Wealth Distribution, Government.
Context: One-quarter of Japan's population is now sixty-five or older, and no amount of policymaking will suddenly cause a mass of working-age people to materialize. No economic regulation will fill the expansive hole in tax revenue. No social welfare policy will result in enough highly trained home health-care workers becoming available to serve this enormous group of people overnight.
Technology Trend: Robot-assisted living.
Implication: Within a generation, there will not be enough people to make Japanese society work as it does today. Anyone interested in the future of robotics would be wise to look not to Silicon Valley, but instead to universities and R&D labs in Japan, where there is extensive research underway. Out of necessity, robots—mechanical systems, artificial intelligence, and automated services—will act as productive, efficient stand-ins for a Generation X that simply wasn't big enough.

Or how understanding the future of terrorism necessitates following trends in digital media:

Modern Sources of Change: Media, Government, Education.
Context: Terrorist groups are using social media channels to recruit new members in plain sight. Soldiers are outfitted with guns and smartphones and trained to shoot both bullets and video. They maintain and operate active Twitter, Facebook, YouTube, Instagram, and Tumblr accounts.
Technology Trend: Hackerrorists. Hacktivists, while often destructive, are hacking for what they perceive to be in the public interest. Hacker-terrorists, "hackerrorists," will use the online world in order to provoke real-world acts of terrorism.
Implication: Hackerrorists are digital media experts. They're quick to adopt new social networks and create a presence in them, and like a hydra, when one social media provider suspends service to a terrorist group, countless new accounts immediately pop up. Hackerrorists are also adept at using "dark nets," niche online spaces promising anonymity and the kinds of encryption tools that hackers favor. Fighting terrorism in the future will mean creating highly sophisticated, algorithmically personalized digital propaganda. Law enforcement will need to hunt down terrorists in the dark web to disrupt and disarm them. Governments will have a complicated encryption battle to fight, since the various groups won't exactly be using a standardized set of tools.

A GOOD TREND IS HARD TO FIND

If we know what a trend is, and that trends connect to the ten modern sources of change within society and share a set of common characteristics, then why are they so hard to spot? I'll answer that question by asking another: Why can you instantly envision what the "Jetsons' car" is? You still know it was a flying car even if you weren't alive to watch The Jetsons in 1963 and don't remember that the car's base was green, that it was controlled by a single joystick, that it had modular red bucket seats capable of moving around the cabin, or that it folded up into George's briefcase before he—wait for it—hopped onto a futuristic moving sidewalk that whisked him into the office.

Now, if I were to ask you to imagine a new kind of micro-electromechanical sensor for stability control that will connect to an upgrade in a car's operating system for . . . there's probably no need for me to go on, right? How can even the most significant innovation in a microchip or sensor begin to compete with that visceral image we all have of the Jetsons' car?

Sometimes, trends are really boring, so we don't pay any attention to them. If we want to plan for the future of transportation, we would need more than an artist's sketch and a memorable story. In order to figure out the future of how we will move around—a Google car? flying discs? teleportation?—we would need to consider at least some of those ten modern sources of change, which aren't quite as captivating as a Jetsons' cartoon car:

Government: Are new laws regarding drones being drafted today that might intentionally or implicitly change how we will operate vehicles on roads or in the sky?
Politics: Are companies in adjacent industries lobbying elected officials for anything unusual, such as a larger budget for the Highway Trust Fund? Are they advocating for investment in sensor technologies?
Economy: Will unpredictable gas prices amid a tepid job market have weaned us away from driving in the near future? Will the largest auto manufacturers need to develop new profit centers in response?
Media: Auto manufacturers are integrating mobile phones into the driver experience. Some manufacturers offer platforms that enable drivers to connect their social media accounts to a car's dashboard. Will consumers demand even more digital functionality and information as they grow accustomed to these newer systems? Will they care less about driving and more about connecting with media while in a vehicle?
Public Health: In 2014, more than 3,000 people were killed and 431,000 were injured because of distracted drivers.43 We are increasingly dependent on our devices. Will car crashes increase such that distracted driving is considered a public health epidemic?
Demography: Research has shown that Millennials place much less importance than earlier generations did on getting a driver's license and owning cars. In 1983, 46 percent of sixteen-year-olds in the United States got their driver's licenses. By 2014, that percentage had dropped to just 24.44 Researchers blame factors such as e-commerce, which decreases the need to drive in order to shop, and social media, which enables people to gather together in digital environments. Millennials who do have licenses
are increasingly participating in sharing platforms like Zipcar and Citibike. By the time autonomous technology has advanced sufficiently for widespread use on highways, will Millennials have caused a permanent shift in society's attitude toward car ownership?

We misidentify trends (or miss them altogether) when we focus exclusively on technology, when the other factors in play are seemingly unrelated, or when the adjacent sources of change aren't part of a compelling narrative. Forecasting the future doesn't always yield headline-worthy results, even if certain trends promise to change how we live on this planet. Early on in The Graduate, the character played by Dustin Hoffman is offered an insightful glimpse into the future, when Mr. McGuire says just one word to him: "plastics."45

In 1967, plastics were actually a pretty exciting subject, if you recognized the signals. The audience winced at McGuire's misguided passion about industrialization, conformity, and mass-market goods. But the near future of plastics made personal computers a reality and turned free-spirited Baby Boomers into wealthy Wall Street Yuppies by the time they were forty. Plastics also spawned the plastic water bottle craze that eventually created the Great Pacific Garbage Patch, an ocean wasteland predicted in a 1988 paper published by the National Oceanic and Atmospheric Administration46 and later discovered by Charles J. Moore in 1999.47 Sailing home from a race, Moore encountered an impassable stretch of plastics: bottles, toothbrushes, and millions upon millions of unidentifiable plastic fragments. A lack of planning in 1967 still impacts us fifty years later: in the 2010s, no scientist, environmentalist, or governmental agency has been able to establish a global policy for dealing with humanity's massive, and still growing, plastic footprint.

Often, future game-changing trends enter society without attracting media attention or interest from the general public during the early years of development out on the fringe. Just as it took more than a decade for the Great Pacific Garbage Patch to become visible, another important techno-social phenomenon—global adoption of the mobile phone—wasn't noticed by most people for sixty years.

Phones became "mobile" in 1947, when AT&T researcher Douglas "D. H." Ring discovered a new method of linking phones to a car's antenna.48 There weren't many subscribers to the service, but for nearly four decades, car phone service was available in a few cities around the United States. In the 1960s, AT&T perfected that original technology via cellular towers that could hand off calls as a driver moved around. The first commercially available mobile phone not tethered to a car was sold in 1984.49 Just ten years ago, you or someone you know was probably still using a flip phone with an antenna and, quite possibly, a matching leather sleeve. Checking your voice mail meant dialing *86, waiting for a prompt, pressing the # key, waiting for another prompt, then entering your PIN code and pressing # again. Taking a photo required using the keypad to get to the home screen, pressing "a," then "h," to get to another screen, and pressing another series of keys to take, name, and store the photo. Convenience trumped the complicated menu system. Mobile phones engendered an always-on lifestyle and entered our collective consciousness.

If you're like most people today, you rely on your flat-screened smartphone to send work emails, shop at the grocery store, or order a car from an on-demand service like Uber or Lyft. This change is breathtaking, but it doesn't feel like it to us today. That's because we're living in the midst of that change. To evolve from a car phone to basic flip phones took six decades. And yet within the next ten years, our mobile devices will have the computational power of a human brain—the least exciting of their features will be their ability to make a phone call. Ten years may sound like a short window for such a leap, but bear in mind Moore's Law and the compounding effect of technological advances. All of those incremental achievements in cellular networking, user interfaces, and processors tend to get overlooked until they add up to something big and recognizable, like the iPhone.

It can be difficult to see trends, and it is especially challenging when all of the changes leading up to a trend's formation are relatively uninteresting, or when they threaten to upend our established, cherished beliefs. That is certainly what happened with Sony. Yet concentrating your efforts on tracking trends in just one area, without taking into consideration adjacent areas, will lead you to follow shiny objects rather than to consider how the ten sources of change are contributing to a bigger shift. The future isn't just some nebulous point in the far-off distance, and so we must think about trends as they relate to different time zones. Because trends align with our evolving human nature and leverage our basic needs, they help us to foresee and forecast change in the future.

So how do you find trends? Before I introduce the first of our six instructions, it's worth showing you what happens when organizations track trends correctly, as you will see with one well-known company that's still thriving more than 125 years after its founding. You will also learn the grim fate of a breathtaking organization that, like BlackBerry, failed to acknowledge the change that was happening.


CHAPTER THREE
Survive and Thrive, or Die
How Trends Affect Companies

TRENDS CAN BE slow to develop, and we don't often understand (or we misunderstand) their long-term potential. Trends help us to understand change, which is an essential part of every organization's mandate—or, at least, of any organization hoping to survive more than a decade or two. That's the moral we can take away from the story of two organizations, one that is well over a century old, and another whose star shone in the firmament for a couple of decades, but ultimately flamed out when it failed to listen to the signals.

Those who intentionally plan for what's next—even very large, sprawling organizations—can more easily forecast what's on the horizon and manifest their own preferred futures. That was certainly the case with a game company founded in Kyoto, Japan, in 1889. You know it today as Nintendo.

Nintendo, which brought us Super Mario Brothers, the Game Boy, and the Wii, started out as Nintendo Koppai, a small playing-card company based in Kyoto. Founded by businessman Fusajiro Yamauchi, Nintendo Koppai produced hanafuda cards, which were similar to today's common set of fifty-two playing cards, except that rather than numbers and suits, hanafuda were printed with one of twelve sets of beautiful flowers.1

By 1953, Yamauchi's grandson, Hiroshi Yamauchi, had become fascinated with an emerging trend, in fact the same one from that iconic scene between Mr. McGuire and Benjamin in The Graduate: plastics. Nintendo's paper card business was profitable, but it was far too limited. Hiroshi Yamauchi had been meeting with the unusual suspects at the fringe and connecting the dots: The advent of plastics for commercial use, combined with new manufacturing capabilities, meant that the gaming space would inevitably become crowded. The price of televisions was dropping, and children's cartoons featuring the Looney Tunes and the Mickey Mouse Club were capturing widespread attention. Walt Disney was getting ready to open his eponymous theme park, Disneyland. Traditional cards, checkers, and chess games would no doubt be replaced by mass-produced board and card games with new strategies, storylines, and even characters.

As he assumed the position of Nintendo president in place of his grandfather, Yamauchi made several bold moves in planning for the future. First, he produced hanafuda cards in plastic.2 His contemporaries no doubt questioned his logic. Plastic cards were far more durable than paper. Wouldn't Nintendo's sales shrink, since, unlike delicate paper cards, a plastic deck could ostensibly be used forever? With these new operations in place, he took a trip to the United States and met with Disney executives. In 1959, which was still very early on in Disney's development, he made a deal to print Disney characters on Nintendo cards. This collaboration opened the playing-card market, which had mostly targeted gambling adults, to children.3

Yamauchi kept listening to the signals and connecting the dots. Soon, there were books with explanations for how to play Nintendo's new Disney card games. Millions of packs of cards had been sold. By the 1970s, however, Nintendo had completely saturated the playing-card market, and the company, which had gone public, saw its stock price plummet. Looking for new efficiencies, Yamauchi toured Nintendo's factories and met with employees. One engineer had been tinkering with a mechanical arm of sorts—it was an expandable set of crisscrossed plastic, with scissor-like handles on one end and tongs on the other. Nintendo developed the arm into the Ultra Hand.4 It marked the introduction of a new kind of toy, one that was both fun and functional. It sold millions of units and paved the way for new products and more experimentation.

In the 1970s, early videocassette machines were making their way into households. Meanwhile, computer programmers had been tinkering with electronic games, which had primarily been simulations of real-world board games. Again considering future scenarios, Nintendo asked its engineers to think through other ways in which video consoles might one day be used, both in the home and in places like restaurants and arcades. What about a television screen at eye level, where someone could stand and play while others watched?

Nintendo made another bold move and in 1973 created the Laser Clay Shooting System, a game intended for everyone to play, not just computer programmers. The following year, it developed another new technology—an image projection system—and manufactured the hardware for it. Nintendo started selling video-game machines, as well as the games to play on them, throughout Japan, the United States, and Europe. By 1979, Yamauchi's son-in-law Minoru Arakawa had moved to New York City to create a base for what would soon become a multinational corporation.5

Nintendo was still very much a company that made games, just as it had in 1889. But it was also a company that invested in tracking emerging trends throughout gaming and adjacent industries. As a result, Nintendo developed products that set the course for the future. Soon came game titles like Donkey Kong, Super Mario Bros., and my personal favorite, The Legend of Zelda. It created advanced home console systems like the NES (Nintendo Entertainment System). In 1988, Nintendo's R&D department was at work on a number of other new projects that melded advances in personal computing and wearable technologies with games. Out of that work, Nintendo created a hands-free controller vest with a mouthpiece that allowed quadriplegics to play video games. The following year, Nintendo debuted the world's first portable, handheld game system—the Game Boy—which had interchangeable game cartridges and introduced a little game called Tetris.6

There were plenty of competitors along the way (Atari, for example), as well as hardware and software bugs that at times proved challenging, if not potentially disastrous. And yet the company continued to innovate. At the end of 2006, Nintendo launched the Wii, a motion-sensing game system
that was accessible to everyone, regardless of age or computer experience. It was connected to the internet, and the controller was a lightweight, wireless handheld wand. To play a game of bowling, you simply held onto the wand and moved your arm, just as you would in a real bowling alley. To golf, you'd swing your arms and hips just as you would out on the course.7

Nintendo survived—and, for the most part, thrived—for more than 125 years because it listened to the signals, spotted early trends, and blazed a new trail for the entire gaming industry. Many of the play control features we now take for granted—not just on rival gaming platforms but also in other places, including our phones and computers—are directly attributable to Nintendo's foresight.

As of 2016, Nintendo was one of the most successful video-game companies in the world, and it was still the leading playing-card manufacturer in Japan. Nintendo is a company that might have been crushed by new technologies, changing customer tastes, and upstarts in the entertainment space, but for one simple fact: it leveraged trends.

The same was not true of a company that, had it been listening to the signals and tracking trends, might still exist today. That company was the Digital Equipment Corporation (DEC), and it completely missed the advent of personal computers.

It's difficult to imagine it today, but two generations ago, computing was only a fringe experiment—and a rudimentary one at that. The earliest computer engineers were total outliers tinkering on the fringe. The first machines were the size of a large room—but if we follow the work of those early engineers up through the advent of the personal computer, we will be able to see a trend unfold, and to understand what happened to one of America's most exciting companies when it refused to recognize how that fringe research could go mainstream.

In 1937, Bell Labs research mathematician George Stibitz sat at his kitchen table, thinking about the electromechanical relays found inside telephone switching systems. Could they be used for other purposes? If they could transmit voice, what about something else, like text? Stibitz decided to start tinkering. Using electromechanical relays, along with some flashlight bulbs and a switch he fashioned out of a tobacco tin, Stibitz built a working prototype for a new kind of machine that could calculate numbers using binary addition.8

Stibitz kept reworking his machine, adding more telephone relays and crossbar switches. Eventually, it could calculate a long division problem, correctly finding the quotient of two eight-place numbers in just thirty seconds. Stibitz soon found himself with a research program and a team to further test and build on his work.

In September 1940, at the American Mathematical Society conference at Dartmouth College, Stibitz discussed his Complex Number Computer (CNC), one of the world's first digital computers.9 (Then, "digital" referred to the ten numbers 0 through 9 that were used to make calculations.) Stibitz used a telegraph to send a difficult calculation—one that a person could not solve without ample time to do the work by hand—from the meeting at Dartmouth to his CNC back in Lower Manhattan. About a minute later, the Teletype line returned a message showing the correct answer. Conference-goers were stunned, having just witnessed this act of computing—as well as the first time a computer had ever been used remotely. For the next three hours, they called out equations, trying (without success) to stump the Complex Number Computer.10


The demonstration sparked great interest in computers, prompting students and researchers at the Massachusetts Institute of Technology (MIT) and the University of Pennsylvania to experiment with new designs. Crunching a few numbers was a clever trick; they had a more important goal, however: a computer capable of doing advanced mathematics remotely, using a series of machines distributed over a network.

War has a habit of marshaling scientific advancement forward. As it happened, while Stibitz was at Dartmouth showing off his invention, the Luftwaffe had been given orders to target British civilians, bombing St. Paul's Cathedral and Buckingham Palace in London at the beginning of the Blitz. Then, America's entry into World War II in December 1941 mobilized funding for research into war-related technology, including computerization. There were a host of problems to tackle. One had to do with the inaccuracy of the US Army's artillery, which required constant recalibration. The gunners needed to track hundreds of variables, such as wind speeds, humidity, and the kind of gunpowder supplied. There wasn't time to calculate all of the conditions—and there were too few artillery shells to waste on guessing.

To solve these and other emerging problems, the War Department decided to fund research on an electronic computer—the Electronic Numerical Integrator and Computer (ENIAC)—in April 1943. Of all the experiments that came before it, the ENIAC was the first that could multitask, as our computers do today: it was programmable, it could complete any calculation, and it was very fast.11 Another computer, the Colossus Mark 1, became operational later that year, and it was used to help decrypt German radio teleprinter messages.12

One of the earliest programmers, Grace Hopper, had worked on the Harvard Mark I as a naval reserve officer. (In 1985, Hopper would be promoted to rear admiral.) After the war, she and her team at the Remington Rand corporation built the first compiler for computer languages, which allowed computers to use language, not just numbers. It was such an unimaginable feat of engineering that, initially, nobody inside academia or the military-industrial complex believed it had been built.13 In the 1950s, Hopper's compiler became the basis for the universal standard for computer languages, COBOL (an acronym for Common Business-Oriented Language). Meanwhile, computers were proving useful to the US Census Bureau, the US Atomic Energy Commission, and the broadcasting network CBS, which used a UNIVAC (for Universal Automatic Computer) to predict a landslide for Dwight D. Eisenhower in the 1952 presidential election.14

It should have been clear that the first era of computing, marked by machines that could calculate numbers, was giving way to a second era of programmable computers. These were faster, lighter systems that had enough memory to hold instruction sets. Programs could now be stored locally and, importantly, written in English rather than complicated machine code. Gordon Moore's 1965 thesis, that the number of components on integrated circuits would double every year, was proving accurate. Computers were becoming more and more powerful and capable of myriad tasks, not just arithmetic.

But for the next twenty years, the business world remained skeptical because it couldn't see this second phase of computing as anything more than a fringe research project. To sell one of its costly computers, the new manufacturer IBM would often send its chairman, Thomas Watson Jr., to convince department managers that the investment was worth the expenditure. Even as computers shrank from the size of rooms to desktops, few could imagine a day in which someone might use a computer outside of work. The paradox of the present blinded CEOs to the sweeping
changes taking place before their eyes across the ten sources of change, which included women entering the workforce. Our economy was globalizing, with many companies opting to do business overseas. Universities were starting to offer degrees in computer science. Students at MIT and Harvard had even built rival computerized dating services.15

Computer scientist J. C. R. Licklider, the head of the Behavioral Sciences and Command and Control programs at DARPA, envisioned another dimension to this second phase of computing. It was a system of computers—maybe four or eight—linked together, using a homogeneous programming language. People might use the system to retrieve a set of data to work on and then share, so that others could use it to further their own research. In a memo, Licklider described "linkages" between a user's programs and those she borrowed. This idea became the basis for ARPANET, the Advanced Research Projects Agency Network, which would soon link programmers at the University of California at Los Angeles (UCLA), the Stanford Research Institute, the University of California at Santa Barbara (UCSB), and the University of Utah.16

Meanwhile, MIT engineers Ken Olsen and Harlan Anderson imagined a smaller computer meant to be used by just one person. They founded the Digital Equipment Corporation (DEC) and went to production on what they called "minicomputers" in the late 1960s.17 These could be used by research labs, government agencies, and businesses that depended on heavy computer use by multiple staff at once. This startup quickly grew to become a market leader, and by the 1980s, DEC would employ 120,000 people, reach $14 billion in sales, and become one of the most profitable companies in the United States.18

There were already people on the fringe starting to think about a third era of computing, one that would enable professional researchers and scientists to share data and collaborate on projects. Futurist Olaf Helmer, writing in his seminal 1967 "Prospects of Technological Progress" paper for the RAND Corporation, said that such a "world-wide network of specialists, each equipped with a console tied to one central computer and to electronic data banks," would someday "interact with one another via the computer network" for the purpose of scientific research.19 But almost no one, including Helmer, foresaw the real trend: a third era that would take a radically different shape as personal computing. They couldn't see how that very same technology might be desirable for everyday people to send messages to each other, to read the news, and to record history. The failure spelled disaster for many businesses, including DEC.

In 1977, Olsen, who had become DEC's president, said that there was "no reason for any individual to have a computer in their home," and that "the personal computer will fall flat on its face."20 Olsen was shackled by his immediate frame of reference. He had built a wildly successful company predicated on small, programmable computers built specifically for businesses and researchers. However, outside his view of the present, there was a revolution afoot:

Education: ARPANET was fully operational, connecting fifty-seven Interface Message Processors, which served as gateways connecting a growing network of computers—and, more importantly, computer programmers. Programmers created the Transmission Control Protocol and Internet Protocol, otherwise known as TCP/IP, making it possible to access the network remotely.21
Economy: The marketplace was being invaded by upstarts. Atari and Commodore had personal computer models to show. Two young enthusiasts named Steve Wozniak and Steve Jobs had built a small computer in their Los Altos, California, garage. Adam Osborne, who had been a pioneer in writing easy-to-read technical manuals for computers, built the world's first commercially available portable computer. Not only did it weigh a mere twenty-four pounds, but it could be used anywhere there was a wall