A funny thing may happen on the way to AI
When we picture the future of technology, scenes from “The Terminator” or “2001: A Space Odyssey” might first enter our minds, showing us the end of humanity or the rampage of psychotic robots. But regardless of how it’s portrayed in films, and despite the industry experts (Bill Gates, Elon Musk, Stephen Hawking) warning us of its risks, leading-edge technology is advancing at an incredible pace, and with it an ever more automated society.
How do we provide for and safeguard our dependence on automation? If cars can drive themselves and air traffic control might one day be performed by Artificial Intelligence (AI), who or what is monitoring the bots? And perhaps most fundamentally: How do we begin to create processes and critical facilities that protect something as non-negotiable as the safety of human life? Could there be an irony in all this: the more automated we get, the more people-centric we also must get?
In recent years, the World Wide Web has become an unbridled multilevel “World Wild West.” With little, if any, regulation, e-commerce has mushroomed into a $1.915 trillion global business (in 2016), with the expectation that it will reach $4.058 trillion in 2020, according to eMarketer.
No doubt, the Web’s consumer focus has played a key role in its exponential growth. That growth, along with the Internet’s many “layers,” has introduced a host of fears that weren’t present 28 years ago, when the WWW was created as an open information platform. They range from concerns about personal privacy to the management of net neutrality.
But here's the game changer, a new fear to add to the list as our world becomes increasingly automated and Web-dependent: the addition of more critical layers to the Internet, layers with implications for life safety. Think technology for self-driving cars, or AI of any sort.
When human safety is at stake, stricter management of technology and the Web must be introduced. The loose self-monitoring of the consumer-led WWW (protecting data on servers and preventing issues from occurring) is no longer good enough. Global agreements must be established to control this sort of technology. And as designers of critical facilities, we need to reconsider how we design the back-end infrastructure where all the Big Data is stored, so that it includes this critical element of protecting human life.
Few people think about leading-edge technology from that perspective. They tend to focus on the visible advances. And even big tech companies and governments, which are doing considerable research into AI, are missing the criticality of a more immediate future.
We may be on our way to a world supported by AI, but other forms of automation with life safety implications, such as those self-driving cars, are already here. No regulation has been added to the Web to address them; no processes have been agreed upon globally.
And with that layer of life safety on the Internet comes something else that we as a society are not fully prepared for: a new and different kind of “e-commerce.” A whole new class of critical facilities, amounting to potentially billions of dollars in investment, will be necessary to build out this new era of workplace automation.
Think about it: NATS in the UK currently monitors more than 6,500 airplane flights a day with a staff of 685 people and a combination of real-time human and computer analysis.
Now, let’s consider the scenario of 1 million autonomous cars on the road and their need to be monitored to similarly ensure human safety. How will that happen? What will the process be? If we apply NATS’s staff-to-flight ratio to vehicles, it would take 105,384 operators, engineers, and other support staff, plus an untold number of new control/data centers and, thus, huge investments in real estate and facilities.
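The staffing math above is a simple ratio scaling, which can be sketched in a few lines of Python (the 685 staff and 6,500 daily flights come from the NATS figures cited earlier; the 1 million cars is the article's hypothetical fleet, not a real projection):

```python
# Back-of-the-envelope estimate: scale NATS's staff-to-flight ratio
# (685 people monitoring 6,500 flights per day) up to a hypothetical
# fleet of 1 million autonomous cars needing similar oversight.
nats_staff = 685
nats_flights_per_day = 6_500
autonomous_cars = 1_000_000

staff_per_vehicle = nats_staff / nats_flights_per_day  # ~0.105 people per vehicle
estimated_staff = int(autonomous_cars * staff_per_vehicle)  # truncate to whole people

print(estimated_staff)  # 105384
```

Even treating this as a rough upper bound (cars and aircraft are hardly monitored the same way), the order of magnitude makes the point: human oversight of automation at fleet scale is a six-figure staffing problem.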
Even finding such a vast number of people who could perform those critical monitoring roles would be a colossal task. Enter AI, which is realistically 10 or more years down the road. What processes must be designed to ensure its safety?
Before such intelligence enters the picture (before machines are left to their own devices, quite literally, and are entrusted to think for themselves), we’re likely to see a mixed system delivering this more automated society. It will have a certain level of “programmed” intelligence, and the rest will be left to human management, not unlike the self-checkout system at the supermarket. Fields of people-staffed control centers and data centers will be necessary.
But as AI proves safe, big business will want to reduce overhead. A transition will take place from people-based control centers to more data analysis by computers.
As designers of these data centers of the future, we must respond to a confluence of critical matters: new technologies; a new kind of e-commerce; new regulations; and most importantly, a need to protect human life. The back-end infrastructure we design suddenly becomes more “front line” as we harden these facilities and build long-term resilience into their programming.
And the funny thing is, these data centers of the future will likely reverse the trend of being populated with ever fewer people (as technology advanced to take care of itself). With the addition of life safety into the equation, critical facilities will need to be more people-centric than ever before.