The New Homunculus - Artificial Intelligence and Ancient Magic

 
 

As mages we are known to stand at crossroads. That's where we perform our best work - at that liminal place where two worlds, two forces, two beings intersect. In this post we will be exploring such a crossroad: the one where magic and modern artificial intelligence research meet.

As I assume most readers of this blog are familiar with the foundational techniques of magic, I'll take less time to lay these out and instead provide a brief and admittedly naive overview of the emerging science of coded neural networks. This initial sketch is obviously not meant for people who study this field - but for the rest of us. The following section then shines a light on the intersection of this emerging science and traditional magic.

Only recently have both crafts not only turned into equal parts science and art, but also begun to confront the same dangers and risks. That's why we will close with reflections on the techniques and approaches modern AI scientists might want to consider stealing from the magician's armoury.

LVX,
Frater Acher
May the serpent bite its tail. 


(1) Algorithms instead of Evocations

From the beginning, the primary interest in nuclear technology was the inexhaustible supply of energy. The possibility of weapons was also obvious. I think there is a reasonable analogy between unlimited amounts of energy and unlimited amounts of intelligence. Both seem wonderful until one thinks of the possible risks. In neither case will anyone regulate the mathematics. The regulation of nuclear weapons deals with objects and materials, whereas with AI it will be a bewildering variety of software that we cannot yet describe. I’m not aware of any large movement calling for regulation either inside or outside AI, because we don’t know how to write such regulation.
— Professor Stuart Russell in an interview with Science (Fears of an AI pioneer, 2015)

When we speak of artificial intelligence today we really have to consider two quite different scenarios. In the first we encounter physical objects animated by artificial intelligence, i.e. the science of robotics. In the second scenario we do not encounter anything, except for a bright screen and something - from behind it or from somewhere - providing an answer to a question we asked. 

It is widely agreed that the latter scenario is currently the bottleneck to revolutionising the former. Intelligent software is the necessary foundation for intelligent hardware to come to life. The main software challenge today, on the other hand, can be broadly described as the problem of mimicking the human brain's learning mechanisms in the world of machine learning. Learning in this case is not only a mode of acquiring more knowledge or skills - but also a mode of changing oneself through the things we learn and experience. When we talk about this kind of neural learning, we talk about nothing less than self-engineered evolution. To get there scientists aim to write algorithms that leverage an autonomous learning environment to optimise themselves.

One part of that equation has already been solved. The actual digital learning environment came into existence long ago. We created it over the last three decades and called it 'big data': billions of digital books and papers from all ages and languages available at any time - thrown into a vast ocean of personal, corporate and governmental user data, statistics on all aspects of our global economies and geographies, as well as scientific tracking data from deep-sea oil drilling to deep-space observatories. When we say 'everything is online' it's good to remind ourselves that there is a digital world outside of Facebook and Netflix.

So we created the playground. Now the question is how long will it take us to create the curious artificial mind - the coded homunculus.

In the magical paradigm numbers hold incredible powers and, if translated into letters, turn into the very fabric of life. From it practitioners of the past have continued to draw sparks, wrapped into the boundaries of an individual being. Whenever they succeeded in doing so they either created a homunculus, if such a being operated without a physical body - or a golem, if it was immersed and bound into clay. In modern AI research the very same premises apply, only the paradigm has shifted. While numbers still make up the raw material, aether has turned into clouds of data and clay into silicon and metal.

To understand how real this parallel is, one has to understand that the most recent generation of AI software operates in a very different way from what most of us grew up with. Historically software was as smart as the information someone had coded into it. Take a computer game for example: the boundaries of the world you can explore are the boundaries of what humans programmed into it. Within this definite amount of code new combinations are possible, e.g. the construction of new levels. But the actual raw material the software operated with always remained a known quantity. It was once limited to a floppy disk - and today to the roughly 2GB we download on our PlayStation or Xbox.

The more recent kind of AI software works in a very different way. It is unconditionally open, without definite boundaries; it blends and merges with the vast ocean of digital data we created. It is centred on its ability to learn, to absorb the new - not on performing a repeatable task. Most importantly, it operates its essential functions away from the eyes or hands of its creators. Engineers no longer know how the software they once generated now generates the answers they begin to see. It's the literal jinn outside its bottle.

Let's take the example of an AI written to learn language: a piece of regulatory core-code is written that performs a compound of interdependent tasks, in this case oriented towards learning how to read a particular language. Yet the code is not written as a closed system. Instead it contains a minimal set of core features only; from there algorithms are used to allow the program to expand itself through training. Then it is exposed to training material in increasing levels of difficulty - the more the better - and through a process of trial, error and reward it begins to enhance its own behaviour, a process called reinforcement learning.
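
To make this trial-and-error loop a little more tangible, here is a minimal sketch in Python - not the code of any actual language learner, but a toy 'bandit' version of reinforcement learning in which the candidate rules and their reward probabilities are purely illustrative assumptions:

```python
import random

# Toy reinforcement learning: the program repeatedly tries candidate
# 'readings', receives a reward when one works, and slowly reinforces
# whatever proved successful. All names and numbers are illustrative.

ACTIONS = ["rule_a", "rule_b", "rule_c"]              # candidate behaviours
TRUE_REWARDS = {"rule_a": 0.2, "rule_b": 0.8, "rule_c": 0.5}

value = {a: 0.0 for a in ACTIONS}   # learned estimate of each action's worth
counts = {a: 0 for a in ACTIONS}
epsilon = 0.1                       # how often to explore something new

for step in range(10_000):
    if random.random() < epsilon:               # explore: try at random
        action = random.choice(ACTIONS)
    else:                                       # exploit: use what worked
        action = max(value, key=value.get)
    reward = 1.0 if random.random() < TRUE_REWARDS[action] else 0.0
    counts[action] += 1
    # incremental average: nudge the estimate towards the observed reward
    value[action] += (reward - value[action]) / counts[action]

print(value)    # the estimate for 'rule_b' should end up highest
```

The point is not the few lines of arithmetic, but that nobody told the program which rule is best - it found out by itself.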

What we’ve done is build algorithms which learn from the ground up, so you give them perceptual experience and they learn how to do things directly. The idea is that these types of systems are more human-like in the way they learn, because that is how humans learn: by learning from the world around us, using our senses, to allow us to make decisions and plans.
— Demis Hassabis of Google's DeepMind

So in the case of a software teaching itself to read a certain language, it has to understand the grammar, rules, exceptions, full vocabulary and dialects of the language in question, i.e. its semiotics. It also faces the issue that people don't talk like machines, and therefore it has to be able to deal with poems, indirect references, puns and humour. One of many examples of this particular challenge is the deep learning system that MetaMind is working on. You could feed it the following paragraph:

Jane went to the hallway.
Mary walked to the bathroom.
Sandra went to the garden.
Daniel went back to the garden.
Sandra took the milk there.

And if asked 'Where is the milk?', it would respond: 'In the garden'.
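
A real deep learning system such as MetaMind's learns this behaviour from thousands of example stories rather than from hand-written rules. But to see what kind of state it has to infer, here is a deliberately naive, hand-coded sketch in Python - the parsing assumptions (subject first, place last, 'took the X') are mine, not theirs:

```python
# Track the last known location of every person and object in the story.
story = [
    "Jane went to the hallway.",
    "Mary walked to the bathroom.",
    "Sandra went to the garden.",
    "Daniel went back to the garden.",
    "Sandra took the milk there.",
]

location = {}                          # last known place of each entity
for sentence in story:
    words = sentence.rstrip(".").split()
    actor, place = words[0], words[-1]
    if place == "there":               # 'there' means the actor's location
        place = location[actor]
    location[actor] = place
    if "took" in words:                # whatever was taken moves with the actor
        item = words[words.index("took") + 2]   # skip the article 'the'
        location[item] = place

print(location["milk"])                # -> garden
```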

So far so impressive. But knowing where the milk is doesn't seem like a Goetic demon just yet, right? 

Well, now consider this: once this source-code is stable and well-rounded enough to perform a decent job in one language, the actual journey begins. Now you open it up to 'big data' and write another piece of code. In this you tell it that there are many more linguistic systems (languages) out there, and that all of them work according to rules similar to the ones it has already mastered through training. You tell it to use the language it already knows as a blueprint - from which to teach itself other languages autonomously. Then you release it into the sea. And so it disappears.
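
In today's research this 'blueprint' idea goes by the name of transfer learning: weights learned on one task become the starting point for a related one. Here is a toy sketch in Python using PyTorch, with random stand-in data instead of actual languages - the dimensions and tasks are pure assumptions for illustration:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)

# A small network standing in for the fully trained 'first language' model.
model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 5))

def train(model, xs, ys, steps=200):
    opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad])
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(steps):
        opt.zero_grad()
        loss = loss_fn(model(xs), ys)
        loss.backward()
        opt.step()
    return loss.item()

# Phase 1, 'language A': random stand-in data for the initial training.
xs_a, ys_a = torch.randn(100, 10), torch.randint(0, 5, (100,))
train(model, xs_a, ys_a)

# Phase 2, 'language B': freeze the early layer (the general blueprint)
# and retrain only the final layer on the new task.
for p in model[0].parameters():
    p.requires_grad = False
xs_b, ys_b = torch.randn(100, 10), torch.randint(0, 5, (100,))
print(train(model, xs_b, ys_b))
```

The frozen layer is the 'language it already knows'; only the last layer has to adapt to the new one.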

If your homunculus has turned out well, after a while you can feed it input in any language you choose - and it will be able to respond, translate and re-translate into any language that it had sufficient exposure to. How it acquired these skills exactly, nobody will ever know. 

It's a bit like releasing a child into the world. You have given it a certain amount of core skills, knowledge and insights - but once it disappears out into the world it is in no way limited to its origins. It evolves. It encounters the unexpected and changes. It learns and grows - away from the eyes, the reach and possible interference of its parents. It becomes its own co-creator.

Here are a few examples of such autonomous deep-learning homunculi which we now like to call neural networks:

  • in 2015 a neural network autonomously deciphered a century-old book written in code that had been impossible to read until then. It was able to do this because it knew how to read one language, was given access to all other languages in the cloud, and was presented with the problem of translating this 'semiotic code', i.e. the coded book, into English. How it actually did it - nobody knows.
  • in the late 1990s Juergen Schmidhuber developed an 'autonomous agent' optimised for curiosity and creativity. It was tasked with predicting future events based upon patterns of historic events. It was also equipped with a reinforcement learning system that provided 'curiosity-rewards' to the software whenever it discovered unknown regularities that improved its predictor - thus minimising the time it spent on boring or obvious future events. (A toy sketch of this curiosity-reward mechanism follows below.)
  • the same Mr. Schmidhuber and his team won a global competition in 2011. The challenge had been to write a neural network that could 'read' road signs presented to it in an everyday photo which also contained other objects as well as background and foreground. Schmidhuber's machine not only outperformed every other deep-learning software written for that purpose, but even the human experts who participated in the study as a benchmark. Here is what Mr. Schmidhuber had to say about the secret of their success:
AA: What’s your team’s secret?

JS: Remarkably, we do not need the traditional sophisticated computer vision techniques developed over the past six decades or so. Instead, our deep, biologically rather plausible artificial neural networks are inspired by human brains, and they learn to recognize objects from numerous training examples.
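
As promised above, here is a toy sketch of the curiosity-reward idea from Schmidhuber's 1990s work: the agent is rewarded not for predicting correctly, but for improving its predictor, so it loses interest in both the already-predictable and the hopelessly random. The data stream and the naive 'last value' predictor are my own illustrative assumptions:

```python
import random

random.seed(0)

def stream():
    """Alternate between a learnable pattern and pure noise."""
    while True:
        yield ("pattern", 1)                       # always 1: learnable
        yield ("noise", random.randint(0, 9))      # unlearnable

prediction = {"pattern": 0, "noise": 0}            # naive 'last value' guesses
error = {"pattern": 1.0, "noise": 1.0}             # running error estimates
curiosity = {"pattern": 0.0, "noise": 0.0}         # accumulated rewards

source = stream()
for _ in range(2000):
    channel, value = next(source)
    miss = 0.0 if prediction[channel] == value else 1.0
    new_error = 0.9 * error[channel] + 0.1 * miss
    # curiosity reward = the drop in prediction error (learning progress)
    curiosity[channel] += error[channel] - new_error
    error[channel] = new_error
    prediction[channel] = value

print(curiosity)   # 'pattern' earns far more curiosity-reward than 'noise'
```

The pattern channel yields reward early on and then becomes boring; the noise channel never yields much at all - exactly the behaviour described above.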

It shouldn’t surprise us to learn that, amongst many other awards between 2009 and 2012, the same team won several 'handwriting competitions' (i.e. competitions where neural networks need to read and interpret actual human handwriting) in Chinese, Arabic and Farsi - without anybody on the team speaking a single word of these languages.

The team has now gone back to Schmidhuber's research of the 1990s and is combining the passive neural networks of its most recent work with his research on fun, creativity and curiosity in machine learning. The goal is to create neural networks that actively seek out new information, evolve themselves based upon what they discover, and push into even more independent learning modes.

The ultimate goal is to create something more intelligent than ourselves. Whoever achieves this goal can happily retire and watch his creation colonise the universe.
— J. Schmidhuber, 5.7.2016, WDR5 'Neugier genügt'

Ultimately what we are faced with is a reality that is at the brink of creating an actual real-life Gödel machine: autonomous software programs that are sufficiently self-reflective to adapt to their environment in an arbitrary fashion (i.e. rewrite their own code) once they find proof that such evolution is useful to fulfil their user-defined function.
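
Nobody has built a full Gödel machine yet, and a real one would demand a formal proof of improvement before rewriting itself. But the shape of the idea can be gestured at in a few lines of Python - the 'proof' below is merely an empirical test, and everything else is an illustrative assumption:

```python
import random

random.seed(1)

def objective(policy, trials=500):
    """User-defined function: how often does the policy guess a biased coin?"""
    hits = 0
    for _ in range(trials):
        coin = 1 if random.random() < 0.7 else 0
        hits += policy() == coin
    return hits / trials

current_policy = lambda: random.randint(0, 1)    # the initial, naive code

# candidate self-rewrites the program could swap in for its own policy
candidates = [lambda: 1, lambda: 0]
for candidate in candidates:
    # adopt a rewrite only once there is evidence ('proof') that it
    # serves the user-defined objective better than the current code
    if objective(candidate) > objective(current_policy):
        current_policy = candidate

print(objective(current_policy))   # ~0.7 once 'always guess 1' is adopted
```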

So this is the army of jinn we are about to unleash from Solomon's secret chest: a generation of independent machines that self-engineer their own evolution hidden from the eyes of their creators - and inaccessible to their hands for intervention - with full access to the (almost) complete historic and real-life data the human race has managed to assemble. From here it seems a minor step for such machines to also rewrite their actual function of origin, their reason for being - if they see value in evolving it in a way that is better adapted to the reality they continue to discover on their own terms. After all, Solomon might simply not have known any better. And in the pursuit of building his majestic temple, why not lend him a digital hand?

Anthropomorphic ideas of a “robot rebellion,” in which AIs spontaneously develop primate-like resentments of low tribal status, are the stuff of science fiction. The more plausible danger stems not from malice, but from the fact that human survival requires scarce resources: resources for which AIs may have other uses. Superintelligent AIs with real-world traction, such as access to pervasive data networks and autonomous robotics, could radically alter their environment, e.g., by harnessing all available solar, chemical, and nuclear energy. If such AIs found uses for free energy that better furthered their goals than supporting human life, human survival would become unlikely.
— Machine Intelligence Research Institute | https://intelligence.org/summary

(2) Old dangers in new dressings

Now, let's rethink what we have heard here. And let's reflect on it through the lens of our experience as mages. What of the above evokes memories of our own tradition? Think back to the original dichotomy of how artificial intelligence is encountered - bound into a physical shell, or not at all, except through the traces it leaves behind.

Isn't this similar to the most fundamental misperception the public holds about the work of the mage? Over and over again people expect demons, whether chthonic or celestial, to take a form visible to the human eye, to become an experiential reality to the limited spectrum of the physical human senses. Of course such scenarios are possible in principle - they always have been historically and they continue to happen: some people indeed did see a golem with their physical eyes, fewer have raised the physically dead, and even fewer have been teleported by jinn. And yet, the pure fact that it can happen has created the essential public misperception that such encounters would be the 'natural' way for spiritual beings to engage with humans. Where indeed the exact opposite is true.

Spiritual beings in 99.9% of all cases live in a realm that is non-sensual to man. If the 'program' we call the world operates smoothly we cannot hear, smell, taste, feel or see them. But they are around us all the time. It doesn't matter if we remind ourselves of their presence or not. No more than it matters to a virus or a molecule if our human brain happens to think of them. The only way to realise their presence is through the impact they make on our lives. 

Just consider the following statement about the most recent generation of neural networks. Wouldn't it be just as true of any practicing mage referring to his relationship with demonic beings?

“It’s almost like being the coach rather than the player,” says Demis Hassabis, co-founder of DeepMind. “You’re coaxing these things, rather than directly telling them what to do.”

Most of us are caught in the same misperception about the nature of spiritual beings as about the nature of artificial intelligence. We imagine the dangers that stem from it through images we borrow from news-streams, Hollywood movies or fiction novels: we imagine military robot armies, remotely operated armed drones or humanoid household robots. And yes, all of these could become a reality or have intruded into our present day already. What we fail to see, though, is that the true realm of artificial intelligence lies withdrawn from our human senses. We do not get to see it - neither through algorithms nor microscopes.

We are living in an iconic age. If news doesn't fit into a single image, it will never get picked up by social media. And if there are no images at all, no reporting whatsoever will happen. We have created a visual culture so dominant that decades ago most of our other senses began to degenerate and wither away. Just as it has become impossible for most people to understand the true nature of magic, so are we challenged to understand the actual dangers and risks of artificial intelligence. Because man is judging an essentially alien species according to the limited habitat he has chosen to live in himself.

As mages we hold a competitive edge though. Much of our training and work is about getting to know 'foreign species' and learning how to deal with them. In the process we learn just as much about their habitat as about our own, we build new skills, change the way we look at the world and extend our human senses back into realms we had almost forgotten about.

Here is just one practical example of such a competitive edge - a place where our experience as mages might be relevant to better navigating the dangers of modern AI coding: chaos magicians discovered decades ago that working with artificial intelligence has strong similarities to working with sigil magic. Both a sigil and the source-code of an AI are designed to fulfil a very specific function or purpose. That 'code' is then released into its natural habitat - in the case of AI into mining and learning from big-data patterns, in the case of sigils into the collective unconscious.

Once released, both pieces of code begin to operate autonomously from their creator's realm of influence. The results they produce often come as a surprise - as the original intent or function was open to interpretations the mage or programmer had never considered.

Take a look at the following AI scenarios and compare them to your own experiences of sigils that backfired. They were written by the 'Future of Life Institute' about AIs that develop destructive methods to achieve originally well-intentioned goals:

This can happen whenever we fail to fully align the AI’s goals with ours, which is strikingly difficult. If you ask an obedient intelligent car to take you to the airport as fast as possible, it might get you there chased by helicopters and covered in vomit, doing not what you wanted but literally what you asked for. If a superintelligent system is tasked with an ambitious geo-engineering project, it might wreak havoc with our ecosystem as a side effect, and view human attempts to stop it as a threat to be met.
— futureoflife.org

So if decades, if not centuries, of magical training and practice can be relevant to modern AI scientists, what advice would we offer? Here are the four things I would suggest to start with. They are time-tested practices of many kyilkhor, golem and sigil-magicians:

  1. Speak to your code. From writing its first line, consider your evolving AI a consciousness in its own right. Don't treat it like a fully evolved being, but as a substance that attracts particular forces of life which, once sufficiently gathered and amalgamated, will kindle into its own spark of consciousness. So seek to contact the forces its emerging pattern resonates with and speak to those.
  2. Use divination. From dream-incubation to tarot or the Quareia deck - leverage the power of proper divination to learn more about the forces your work will be attracting and unfolding. Use layout patterns that tell you about the forces you'll be contacting on the inner realm, as well as layouts that tell you about the long-term impact of your code in the outer realm.
  3. Work in service. Dedicate your professional work to a specific ancient god or goddess and allow her/him to take control of the impact it will be generating. Your AI might still turn out to unfold destructive effects - especially if you choose an imbalanced divine being. However, it will now work in service of a force that is meant to shape the makeup of our planet - rather than being thrown blindly into the delicate weaving of divine forces.
  4. Learn a supplementary craft. If you truly want to take responsibility for the impact of your work, dig deeper. Don't stop at the surface, but walk the hard way and develop a set of magical skills that will balance and counter your coding skills. Doing the Quareia course - step by step, with a long breath - is the best training you'll ever find for this purpose. Because its methods and techniques are stripped bare of any traditional overlay or historic fabric. It leads you right into the heart of the matter, towards seeing and dealing with the weave of power, the backend of creation. The same backend your AIs will be working on as well - only leveraging zeros and ones instead of will and imagination.

For many years I calmed myself by remembering this wonderful quote by William Gray. When I had messed up badly again or felt wrenched by fears of failure, I recalled these words:

We do not have enough power at our own disposal to do anything very wonderful.
— William Gray, Inner Traditions of Magic

Since then I guess I learned better. Man indeed might not have the power to do anything truly wonderful or magical. But maybe his creations will?