The Electric Brain
Is a transhumanist future for humanity inevitable?
If you grew up in the early 2000s like I did, you probably feel as though history is finally catching up with you. When I was young it felt like humanity had simply ceased to invent new things. Computers, TVs and mobile phones were just slowly getting thinner, and the internet was simply a convenient way to do homework. These technologies were refined every year, but so slowly that the change was imperceptible except in hindsight, like the hands of a clock. As a teenager, the only technology that seemed to me to show genuine progress was video games.
I remember all kinds of stories about what the future would be like: flying trains, space warfare, invisible tanks, hover-boards, and holograms. Fast forward to 2018 and none of these things have arrived, but we do have GPS, social media, killer drones, self-driving cars, and Alexa. It used to seem like most technological change happened in the realm of entertainment: TV, video games, smartphones and so forth. There did not seem to be anything fundamentally world-changing about them. But because technological change will largely determine what jobs most of us will or won’t be doing in the future, it’s incumbent on us to take it seriously.
It’s a full-time job keeping up with developments in even one technology sector, let alone trying to predict a general direction of travel. Another complication is that most modern tech is interlinked, so a breakthrough in one field can have a huge impact on others.
Another difficult task is prying real developments apart from mere rumour, and predicting which aspects of technology will become permanent and which are essentially gimmicks. Social media didn’t seem all that exciting when it was pitched as a way to reunite with old acquaintances, and the first mobile phones weren’t sold as the mini-computers that we all carry today.
Other common predictions of my childhood have been falsified. The world’s natural resources, including fossil fuels, fresh water, and trees, were all supposed to have run out in the early years of the 21st century. Just a few years ago it seemed obvious that we would be wearing our computers or phones as glasses and scrolling through them with our eyes. Similarly, we were all supposed to be watching movies on 3D TVs; now all these things may just be for our grandkids to point and laugh at.
No one writing today wants to sound like the 1960s writers who predicted people jet-packing to work, with robot house cleaners, inside giant domed cities. The world’s governments haven’t yet been superseded by a giant conglomeration of mega-corporations (a common trope of 1980s fiction), and we still go on holiday in planes designed in the 1970s. We’ve even, disappointingly, failed to produce an army of clones or to order features for our genetically modified babies off a menu as in the film Gattaca.
We can observe many similar fears about the technology of our own time. People will spend all their time in simulations and never leave their apartments (virtual reality). Giant shadow corporations and secret mafias will spread across the globe (cryptocurrency), and no one will have to work because machines will make everything for free (3D printing). Look at the history of such predictions and you’ll find that versions of all of them have been made before, and they were wrong then. One concern that is arguably worth the coverage the media gives it is AI.
Killer Robots
Although it can seem like the tech industry hardly talks about anything else, AI can still be very hard to grasp, because no one is sure how the problem will present itself in the future.
Earlier this year, the UN debated a resolution on ‘Killer Robots’, AI that can take the decision to end a life without human input. Six countries have called for an outright ban on development, with several others (the US among them) arguing that the technology is still at too early a stage for regulation to be effective.
It’s not hard to see why some countries are uneasy. The US, Israeli, Chinese, and Korean navies are already equipped with weaponry that can automatically detect and shoot down incoming missiles. Google employees resigned in protest earlier this year over revelations of a contract with the US government to apply machine learning to military drone footage. Unmanned underwater vehicles (UUVs) and self-driving tanks are already being developed by the Russian military and will likely proliferate in the 2020s.
While fears about such developments are obviously justified, what tends to be less considered is the potential militarised AI has to vastly reduce both soldier and civilian deaths in warfare. Not only can robots take the place of soldiers who might otherwise be killed, they could also distinguish civilian from enemy with far greater accuracy than a human. The fear should be of this technology falling into the hands of people who lack such moral concerns.
This is the fear with so-called ‘narrow AI’: algorithms so efficient at highly specific tasks that they could perform millions of calculations a minute and nullify any tactical advantage an enemy power might have. We are already experiencing an arms race over this kind of AI and its potential to be deployed in military operations.
Robots are an interesting component of the AI question. While modern robotics is distinct from AI, the machine learning required for robots to recognise objects often makes the two seem synonymous. The feasibility of robotic applications often depends on mundane things like convenience of use. Alexa and Google Home may be cool, but it’s still easier to press a button on the remote control. Similarly, we seem to have been inundated with robots over the last few years. Even a cursory internet search will leave you overwhelmed with mind-blowing videos showing everything from robot bands to security guards, teachers, and waiters. While it’s safe to say many commercial bots currently available will fade away like other gimmicks, the real test is whether the technology offers a cost-effective way to save time and money.
A good example is chatbots. While they are annoying to deal with, in customer service they have the advantage of being available 24/7. When trying to find out basic information about a company or reschedule an appointment, this can be a major advantage. Other innovations that seem likely to catch on are robot surgeons, which can perform complex operations with much greater precision than a human hand. Robots will also almost certainly take over in places humans can’t go, such as hazardous environments or outer space. NASA has already developed the Valkyrie robot with space exploration in mind, and many deep-sea diving robots are employed to explore the oceans. China even recently announced startling plans to create an underwater colony of robots for this exact purpose, so it’s a safe bet that they’re going to be a mainstay of deep-sea exploration fairly soon.
Advancements in recognition and dexterity technology mean that robots manufacturing more complex things like electronics, toys and furniture will likely become more common. Whether this will translate into things like robot waiters and robot house cleaners depends on how convenient it is to have a giant robot arm wandering around your house that needs to be told exactly what to pick up and when, or whether (as seems likely) it’s just easier to do it yourself.
Learning to Learn
Artificial general intelligence (AGI) is an entirely different prospect from the kind of narrow AI used in robotics. The concept entered the public imagination after IBM’s supercomputer Deep Blue beat the world’s best chess player, Garry Kasparov, in 1997. IBM’s Watson would later go on to beat human contestants on the live TV quiz show Jeopardy!. Even this was eclipsed in 2016, when Google’s AlphaGo mastered the incredibly complex Chinese board game go; over thousands of iterations it taught itself the game and, in the space of about six months, went on to beat the world champion Lee Sedol. Google then developed an improved version called AlphaGo Zero, which was able to beat AlphaGo after just three days of training itself.
So what are the world’s most powerful AIs doing at this moment? Why, playing video games, of course. Surprisingly, video games are a very good way of training an AI to behave in complex environments and adapt to new rules. DeepMind and OpenAI (which Elon Musk co-founded) have been busy mastering classic Atari arcade games as well as the popular strategy game Dota 2. AIs are also given building blocks in simulated environments and told to achieve natural movements. DeepMind’s 2017 report, ‘Producing Flexible Behaviours in Simulated Environments’, details how an AI can test out different ways of manipulating a pair of legs to move across a room. The hope is that this will allow robots to learn basic human- or animal-like movements in the real world.
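For the curious, here is a rough sketch of the trial-and-error idea at work. It is my own toy example, not DeepMind’s actual code: the ‘agent’ tries random tweaks to its behaviour in a simulated environment, keeps whatever earns the most reward, and discards the rest. The distance_walked function is a made-up stand-in for a physics simulator.

```python
# Toy illustration of trial-and-error learning in a simulated environment.
# distance_walked is a hypothetical stand-in for a physics simulator that
# rewards stable walking; the agent never sees the ideal value directly,
# it only sees the reward its attempts earn.
import random

def distance_walked(stride_length):
    ideal = 0.7  # unknown to the agent
    return max(0.0, 1.0 - abs(stride_length - ideal))

best_stride, best_reward = 0.0, 0.0
for episode in range(1000):
    # Propose a small random variation on the best behaviour found so far...
    candidate = best_stride + random.uniform(-0.1, 0.1)
    reward = distance_walked(candidate)
    # ...and keep it only if it performs better.
    if reward > best_reward:
        best_stride, best_reward = candidate, reward

print(f"learned stride: {best_stride:.2f} (reward {best_reward:.2f})")
```

Real systems use far more sophisticated learning rules, but the loop above captures the essence: propose, measure reward, keep what works.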
This might look impressive, but the distance between it and an actual thinking machine is still extremely large. Expert opinion, however, should convince anyone that the prospect is not simply science fiction.
At the Joint Conference on Human-Level Artificial Intelligence held in Prague, a poll found that 37% of respondents thought human-level AI was coming within the next 10 years, and another 28% within the next 20. Just 2% thought it would never exist. These responses came with many caveats from the respondents, and disagreements about what actually constitutes AGI.
Before the development of AlphaGo it was thought that any AI capable of playing go would constitute an AGI, but this has been proven wrong. The Turing test, long held up as the gold standard of machine intelligence, has been passed many times by chatbots, sometimes simply by deflecting questions with further questions. Others have suggested a pyramid of machine cognition that includes things such as intuition, curiosity, prediction, humour, and self-awareness. The truth is that there is no clear line. Some outlets have recently reported that DeepMind’s systems already constitute an AGI after the development of something called transfer learning, which allows them to learn from previous data sets rather than starting from scratch each time.
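As a concrete, deliberately simplified illustration of what transfer learning means in practice, the sketch below reuses a network already trained on one task as the starting point for another, rather than training from scratch. It assumes a recent version of the PyTorch and torchvision libraries, and the two-class task at the end is hypothetical.

```python
# A minimal sketch of transfer learning: start from a network trained on
# ImageNet, freeze what it already knows, and train only a small new head
# for a new (hypothetical) two-class problem.
import torch
import torch.nn as nn
from torchvision import models

model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

for param in model.parameters():
    param.requires_grad = False              # keep the previously learned features

model.fc = nn.Linear(model.fc.in_features, 2)  # new, trainable output layer

optimiser = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of images.
images = torch.randn(8, 3, 224, 224)
labels = torch.randint(0, 2, (8,))
loss = loss_fn(model(images), labels)
loss.backward()
optimiser.step()
```

The point is the reuse: almost all of the network’s knowledge carries over, and only the final layer is learned fresh.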
Think about the cognitive elements involved in, say, buying a gift for a friend. A narrow AI could easily examine data sets about what your friend might like and compare them with data sets about gifts that people usually respond well to. An AGI, however, would weigh up all the different opposing factors and find the best outcome. This quality is what some experts think will constitute an AGI.
The Intelligence Race
A serious limitation when it comes to training on data is processing power; computing hardware is the crux that AI development depends on. The computational power it takes to train an AI is astronomical, with even straightforward tasks like recognising a pencil requiring thousands of iterations. Demand for GPUs at companies like Nvidia has risen 83% year on year, generating $760 million in revenue, driven mainly by machine learning. Recent data from these companies suggests that Moore’s law can only take the tech industry so far, and that the returns on increasing computational power are getting lower each year. Transistors in computer chips are approaching the scale of individual atoms, so the scalability of microchips seems to be reaching a dead end.
This is what has driven the rise, over the last decade, of ever-increasing numbers of supercomputers. Gigantic rooms the size of office floors or car parks, filled with rows of glowing black boxes, sit underneath many of the world’s best universities, quietly humming away. Massive computing sites have been around since Turing, but the advances in the size, scale, and complexity of these projects are being driven by strategic competition between two countries in particular: the United States and China.
Trade war aside, the US and China are currently in the midst of what seems to be an intelligence race, involving escalating computer power, AI and IQ. China has more supercomputers than any other country in the world, holding 229 spots on the Top 500 list. The next three contenders are the United States (108), Japan (31), and the UK (20). In 2016 China unveiled what was then the world’s most powerful supercomputer, the Sunway TaihuLight, featuring more than ten million CPU cores and capable of running at 105 petaflops (a petaflop is a quadrillion calculations per second). This year, however, first and second place in the supercomputer rankings were secured once again by American machines, both developed by IBM: Sierra at Lawrence Livermore National Laboratory in California at number two, and the Summit supercomputer at Oak Ridge, Tennessee at number one. With 2.5 million gigabytes of memory and 200 petaflops, Summit is about 50% more powerful than the TaihuLight, and uses enough electricity to power a small town.
The Summit computer recently fulfilled predictions of becoming the world’s first ‘exascale computer’, meaning a machine that can perform at one ‘exaflop’, or a quintillion calculations per second. It completed 1.8 quintillion calculations per second while analysing genomics data in June this year. "The results are pretty exciting," said the lead coder on the project, Wayne Joubert. "In one hour on Summit, we can solve a problem that would take 30 years on a desktop computer." The creation of an exascale computer has long been a stated goal of the Chinese government, which aims to develop one by 2020. The Chinese word for computer is diannao, which literally translates as ‘electric brain’.
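For readers who, like me, lose track of the prefixes: each step below is a factor of a thousand, and a quick back-of-envelope check (my arithmetic, not Joubert’s) shows that his desktop comparison is roughly what you would expect for an ordinary machine in the few-teraflop range.

```latex
\begin{align*}
1~\text{petaflop}  &= 10^{15}~\text{calculations per second}\\
1~\text{exaflop}   &= 10^{18}~\text{calculations per second}\\
1~\text{zettaflop} &= 10^{21}~\text{calculations per second}\\[4pt]
\frac{30~\text{years}}{1~\text{hour}} &\approx 2.6\times 10^{5},
\qquad
\frac{1.8\times 10^{18}~\text{ops/s}}{2.6\times 10^{5}} \approx 7\times 10^{12}~\text{ops/s}
\end{align*}
```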
These mysterious super-processors seem to be useful for more or less everything. Almost all areas of science now demand immense mathematical computation: climate models, health research, genomics, mining, and space exploration all require massive numbers of calculations. Exascale computing was cited as necessary for the Human Brain Project, which aims to map and simulate every neuron in the human brain. It’s been estimated that to effectively predict the Earth’s climate we would need a computer capable of one sextillion (1,000,000,000,000,000,000,000) calculations per second, or one zettaflop.
Now that exascale computers are coming into existence, the next step may be quantum computing. Using the peculiar mechanics of quantum superposition and entanglement, a quantum bit (qubit) can occupy a blend of the states 0 and 1, rather than being strictly one or the other as in the binary code computers use today. Given how rapidly the number of states multiplies with every qubit added, you can get an idea of what quantum computers might be capable of. Prototypes have already been developed by Google and IBM, and resemble somewhat intriguing upside-down metal wedding cakes. As with AI, Google is the furthest along in this field, this year unveiling the most powerful quantum computing chip yet, ‘Bristlecone’, which holds a record 72 qubits.
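A rough way to see where the claimed power comes from, in a deliberately simplified picture: a register of n qubits is described by a superposition over all 2^n classical bit patterns at once, so Bristlecone’s 72 qubits correspond to an astronomically large state space.

```latex
|\psi\rangle \;=\; \sum_{x \,\in\, \{0,1\}^{n}} \alpha_x \,|x\rangle,
\qquad \sum_{x} |\alpha_x|^{2} = 1,
\qquad\text{so for } n = 72:\; 2^{72} \approx 4.7 \times 10^{21}\ \text{amplitudes}.
```

Actually harnessing all of those amplitudes for useful computation is the hard part, which is why practical quantum machines remain experimental.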
With all the fuss over quantum computing and exascale machines you’d think that horsepower was all that mattered, but what the computers are used for is much more important than bragging rights over who can achieve the most petaflops. Specialised supercomputers exist for highly specific tasks. Iceland’s Thor data centre supercomputer runs entirely on renewable energy and is used for climate research. Japan’s MDGRAPE-3 is used to simulate protein folding for medicine, and the SpiNNaker supercomputer switched on this year in the UK was created specifically to model the neurons of the brain.
The elephant in the room is that these supercomputers are not only useful for benign things like climate and medical research; they can be applied to almost anything, including gaining a tactical advantage from economic data or in military strategy. The US government already employs a wide range of supercomputers specifically for military technology and communications, some of which rank among the top 20 most powerful. In 2016 Russia boasted of its own massive military supercomputer, which the defence minister said could help it “analyse and predict emerging threats”. A quantum computer would be invaluable in any military conflict, because an adversary’s communications could be cracked by a machine a billion times more powerful than the one that cracked the Germans’ Enigma code in World War 2.
Deep Thought
The three moving parts of the current tech puzzle (robotics, AI, and raw computing) seem on the verge of being invigorated by quantum computing. We know there are many areas where robots using AI will become more dominant: in manufacturing, and in environments that are normally too dangerous for humans (such as nuclear sites, space, war zones, fires, and the deep ocean). A recent essay from the Brookings Institution laid out how the next war between great powers would be dominated by AI, even using only current technology. The nature of supercomputers means they will likely be dispersed across universities and government centres, taking on a variety of forms rather than becoming a gigantic sentient being like Deep Thought from The Hitchhiker’s Guide to the Galaxy. It seems likely that in the future almost every city and large company will have its own data centre in order to remain competitive.
One aspect of this type of futurism that is harder to pin down is the ultimate endgame of enhancing human cognition with technology. The transhumanist movement has long included a school of thought called multiplicity, which holds that instead of AI replacing humans, the two will develop alongside each other, integrating silicon with carbon-based life to achieve a transhumanist future.
This is the stated aim of Neuralink, a company co-founded by Elon Musk and currently the only one of its kind that aims to actually enhance the human brain using micro-implants. If reports are to be believed, Neuralink has succeeded in creating 3D transistors that can be injected inside a mouse’s skull, connect with its retina and record its eye movements. The use of 3D transistors and a mesh shape has reportedly overcome the immune response that critics like Steven Pinker have suggested would make any such implant impossible. This is scary stuff, and one doesn’t need much imagination to think of the implications if it were used in humans.
If a programmable, computer-based brain with access to a search engine became possible, then not only could everyone become part of a hive-mind-like consciousness, but human brains would instantly become vulnerable to cyberattacks. The technology is obviously still too primitive to start planning for such scenarios, but think of what we could, and would, change about the human character if we were given the opportunity to reprogramme our brains.
In some ways this doesn’t have to seem so strange. Just think about the ways we modify ourselves already. We drink coffee to stay focused at work, drink alcohol when we want to be sociable, and eat sugar when we want to feel good. Many people around the world routinely take medication to sleep on time, or meditate to better reflect on their ambitions. If such modifications were available, the results may not be as intrusive as they at first seem: we already have giant online networks of people we don’t talk to, and an infinite supply of facts. The potential, and the danger, lies in how this could be integrated with the brain. Our thoughts depend on our memories and our biological neural structure. If we have the chance to change all of that, then the only constraining difference between different consciousnesses is the human body itself, which of course could also be replaced by a replica body that performs far better than ours.
Maybe we are destined to become part of a great hive mind that will eventually saturate the universe, absorbing inanimate matter and pondering how it can inhabit new dimensions. Or maybe this implantation business will go the way of cloning technology: humanity may simply shrug its shoulders and decide that it’s fine keeping the computers at arm’s length. Even if brain-imaging technology improves to the point where we can upload our memories onto computers and 3D-print perfect replicas of ourselves, we may simply decide to carry on as we are, doing perfectly human activities with a few convenient enhancements here and there, a digital backup maybe, but still spending our time in the bodies we evolved in, and living our lives as enhanced apes. Even under threat from an asteroid or a neutron star explosion, we can simply carry on being human as far across the galaxy as we can manage. There should not really be anything inevitable about a transhumanist future. After all, what is the point of new technology if it can’t help us enjoy being human?
As a child I may have wished for technology to fulfil the promises made to me by so many science fiction books, but as an adult I may reflect that I should have been content with better video games.