“I’m waiting on a train platform in Tokyo with a colleague. Commuters stand around me waiting, staring at their phones to pass the time. It’s 2003, four years before Apple releases the first iPhone. I take a photo.”
Back in 2003, when I took that photo and we showed it to people, they mostly shrugged. They said, “That’s just Japan,” and, “That’s just DoCoMo,” [the Japanese telecommunications company]. Senior people from essentially all the major tech companies looked at the photo and basically said, “No, it’s never going to happen.”
I look at this photo now, in 2018, and yes, the clothes are slightly dated and the phones are relics, but it looks like any public place in any urban center, anywhere on the planet right now.
Had people looked at the photo and thought, “Actually, there’s something really powerful in the notion of a phone, connected to a network, that knows who’s around you,” they might have had different reactions.
This tale from our recent history is one we can learn from. Though it is about technology, it is essentially human. It is about the things that are changing, and things that are relentlessly stable.
In this article I’m going to give you some examples that I think echo that moment back in 2003 – that train platform that we ought to be looking at and saying, “Maybe there’s something in this.”
Stories from the future
To see just how consistently different generations have imagined the same future ideals, we can examine the history of stories from the future. This is one of my personal favourites, which appeared in magazines in the US in 1958.
“Your air conditioner, television and other appliances are just the beginning of a new electric age.
Your food will cook in seconds instead of hours. Electricity will close your windows at the first drop of rain. Lamps will cut on and off automatically to fit the lighting needs in your rooms. Television “screens” will hang on the walls. An electric heat pump will use outside air to cool your house in the summer, heat it in the winter.”
We are now living the imaginings of those electricity zealots. Was it inevitable? Or was something set in motion by that passion, that wonder, transmitted through generations until it was fulfilled? And what of those imaginings that aren’t yet a reality? What is it that has delayed them?
Will the robots gossip too?
Not all imaginings are aspirational. Many are fearful. But do we fear the right things?
Let’s start with something banal: the robotic vacuum cleaner. These unassuming black disks happen to be the largest installed base of robots on the planet right now. When someone tells you the robots are coming to kill you, what you need to know is that the robots coming to kill you are, in fact, robotic vacuum cleaners. An existential threat sits right in your home, ready to clean up last night’s pizza crumbs.
Last year, the CEO of a company that sells robotic vacuum cleaners said publicly that their robotic vacuum cleaners had been mapping the inside of people’s houses and that they were looking at ways to monetise that data. There was outcry and he backed away from it, but the genie was out of the bottle.
These robots – like any objects featuring computation and a radio stack – are hackable. But you don’t need to hack the Roomba to get access to this data. You can just buy it.
Just like that train platform, a robotic vacuum cleaner may not seem like something about which you should say, “Maybe there’s something in this”.
But what if you knew the footprint of a building without ever having to hack anything, without ever having to break any laws, without ever having to decide whether what you knew was right or not, and without leaving a footprint that could be turned back on you? What if you could just buy the data? Where does that leave you? What becomes vulnerable? What becomes knowable? And how do you think about who else holds information that can suddenly, effectively, be weaponized?
There are multinational companies who are building out smart cities, and smart homes, and smart infrastructures. They are collecting vast quantities of data on everyone and everything existing within these worlds, curating digital archives of how we live our lives.
What are they doing with that data, where does it sit, and how is it being made sense of?
What does data tell us about being human?
The world we are moving into is a world that is driven by data sets. There are two key questions underpinning this world: Who has the data? What patterns might the data reveal?
Algorithms, artificial intelligence, and machine learning are all part of a complex of technologies that can provide answers to that second question and use those answers to make decisions. Machine learning is a tool by which you take a large set of data and ask the machine to identify any patterns that emerge. This, as it turns out, is a relatively straightforward process. All an algorithm does is automate what happens when it encounters a particular set of data, or a particular set of circumstances. And artificial intelligence is a constellation of algorithms that can perform machine learning by themselves.
These technologies open up the possibility of automating certain kinds of processes. In other words, we can create machinery that can analyze data much faster than humans can. We can also create machinery that acts based on its analysis of the data we feed it. This is nothing new. Machines like this helped break codes in World War II, have identified long-lost friends on your social media platform of choice, and have streamlined your latest online shopping experience.
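To make the loop described above concrete, here is a deliberately crude sketch in Python. It is illustrative only: the viewing-habit data, the labels, and the nearest-centroid rule are all invented for this example, and real machine learning systems are vastly more sophisticated. But the shape is the same: a “learning” step that extracts a pattern from past data, and an “algorithm” step that automates decisions about new data.

```python
# A toy pattern-learner: compute the average point of each labelled
# group in past data, then classify new data by the nearest average.
# All data and labels here are invented for illustration.

def fit_centroids(samples):
    """Learn a pattern: the mean feature vector of each labelled group."""
    sums, counts = {}, {}
    for features, label in samples:
        prev = sums.get(label, [0.0] * len(features))
        sums[label] = [s + f for s, f in zip(prev, features)]
        counts[label] = counts.get(label, 0) + 1
    return {label: [s / counts[label] for s in total]
            for label, total in sums.items()}

def classify(centroids, features):
    """The 'algorithm': automate a decision for a new data point."""
    def sq_distance(label):
        return sum((c - f) ** 2 for c, f in zip(centroids[label], features))
    return min(centroids, key=sq_distance)

# Hypothetical viewing history: (hour of day, minutes watched) -> habit.
history = [
    ((21, 90), "evening binge"),
    ((22, 120), "evening binge"),
    ((8, 10), "morning headlines"),
    ((7, 15), "morning headlines"),
]
model = fit_centroids(history)
print(classify(model, (20, 80)))  # → evening binge
```

Note what this toy shares with the real systems: the machine can only ever match new behaviour to patterns already present in the data it was trained on, which is exactly the retrospective, partial quality of data discussed next.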
But what does that data represent? Though it holds clues to the future, data is always retrospective: something must happen in order to allow us to collect the data in the first place. Data is always partial: it is only as good as the mechanisms by which it was collected, the circumstances under which it was collected, and your capacity to know whether it accurately represents the world.
Apart from questionable accuracy, this situation means it’s really hard to innovate. Because it turns out that one of the things about human nature is that we want to be surprised. Are we creating a world filled with algorithms that only know how to deliver familiarity?
What stays relentlessly the same?
We look at these two pictures and we think – gosh, what a lot has changed! We now have colour TV, flat panels, remote controls, a whole range of content delivery mechanisms (streaming, DVD, VOD, cable, satellite, etc.), and we watch on computers, laptops, tablets and phones. But I look at this and think how relentlessly our love of TV has stayed the same – we love gossiping about content, and we love a good story. The bedrock here is solid: the appearances might shift a little, but the underlying preoccupation, concern or need persists over decades, centuries, even millennia. Technologies that enable the telling of stories will be sites of change in delivery, but not in objective.
We need to belong. Social systems are designed around units of social organization – we have an enduring need to be nested in some set of social relationships. Humans are social beings.
The implication for technology here – which we perceive as always changing – is that everything, from the introduction of the telephone to the mobile phone, from email and digital cameras to photo sharing and social media, is about keeping familial and social connections. Humans as social beings are at the heart of everything.
I’ll leave you with a final word of caution.
There are still seven billion of us on the planet, all seeking connection with the world and each other via product choices, online activities, sources of news and information, and our notions of the world, and all of this is on display in our digital world, easier to both access and manipulate than ever before. In this, there is promise, but there is also vulnerability.
So, as we think about the future that is already all around us, what is this future about?
It is about connected objects, but it is also about the people who own them and the data that goes with them. It is about the data that you’re training things with. It is also about the fact that, when push comes to shove, we’re all humans in this system, and as humans we have social needs: we want to be part of something bigger, we need to have secrets, to be surprised, and to take time out.
When thinking about new technologies, we can never deny these facts.
Professor Genevieve Bell is the Director of the 3A Institute, Florence Violet McKenzie Chair, and a Distinguished Professor at the Australian National University (ANU) as well as a Vice President and Senior Fellow at Intel Corporation. Prof Bell is a cultural anthropologist, technologist and futurist best known for her work at the intersection of cultural practice and technology development.
Maia Gould is the engagement and impact lead for the 3A Institute, and works closely with the team in exploring research collaborations with academia, government and private industry. She completed undergraduate studies in Neuroscience and English Literature, and completed her Masters of Bioethics with a dissertation on the communication issues involved in democratic decision-making on complex and emerging scientific issues.