CodeBork | Tales from the Codeface

The coding blog of Alastair Smith, a software developer based in Cambridge, UK. Interested in DevOps, Azure, Kubernetes, .NET Core, and VueJS.



Quantum Computing (basing computer processing and storage on subatomic particles, rather than discrete voltages) has long been hailed the next big step in computer science. Exploiting the “weirdness” of quantum mechanics offers huge steps forward in parallel processing and cryptography, but progress has been slow. After all, it’s not easy pinning down a single photon, electron, or other particle, and getting it to obey your every command. To be honest, it’s a bit like herding cats.

Enter Chaotic Computing. As featured in the current issue of the New Scientist and available online here (subscription required), this relatively new field promises similar progress to Quantum Computing with a fraction of the overhead; chaotic circuits can be easily built with today’s technology.

Generally speaking, chaos in computing is not desirable. We computer scientists like our creations to be deterministic, and have come to expect them to behave that way. After all, the CPU at the heart of every computer is a deterministic beast: put in 2, another 2, and instruct it to add the two numbers together, and you will always get 4. There are, of course, occasions when determinism breaks down, or is simply impossible to achieve (such as in concurrent programming), but even that is a long way from chaos. Electrical engineers and the like may learn how to create chaotic circuits, but mostly as a guard against creating them by accident: students are taught the characteristics of such circuits so they know to avoid them.

So it’s all a bit of a shock to learn that chaos can be “tamed” to provide a useful function. What is chaos, though? Is it a purely anarchical system with no logic or order? Is it a completely random system? In a word, no. This is the chaos underpinning the Butterfly Effect, the idea that a tiny seed occurrence can have enormous consequences in a system with a heightened sensitivity to that occurrence: classically, a butterfly flapping its wings in Brazil could set off a tornado in Texas. (Not-very-interesting side note: there are literally hundreds of variations on this. Check out some of them here.) There is very little logic, and certainly no way of predicting the output of anything but the simplest chaotic systems given any input, but there are some underlying properties that define them:

  1. it must be sensitive to initial conditions
  2. it must evolve over time so that any given region of the system will eventually overlap with any other region

Paraphrased from Wikipedia’s definition.

There is a third property that I didn’t fully understand (follow the link for more information), but I’m hoping that doesn’t dent my understanding too much, and the two points above seem to encapsulate the idea of a chaotic system quite nicely. For example, think about adding milk to a cup of tea or coffee: your initial condition is a cup of black tea, which is a stable state until the milk is added. It’s stable in that it’s not going to spontaneously turn green, but due to the Brownian motion of the water molecules it is sensitive to other fluids being added to it. Add the milk, and it “billows” its way down and then back up through the tea, regions of the black tea turning white and overlapping with other regions, turning them white in turn… and so on until diffusion is complete and you have a cup of white tea. Ever wondered why the phrase “storm in a teacup” works so well? That’s exactly how a planet’s weather systems work.
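Property 1 is easy to see numerically. Here’s a minimal Python sketch (my own illustration, not from the article) using the logistic map, a textbook chaotic system: two starting points that differ by one part in a million produce completely different trajectories within a few dozen iterations.

```python
# Sensitivity to initial conditions, demonstrated with the logistic map
# x -> r*x*(1-x) at r=4, where the map is fully chaotic.
def logistic_orbit(x0, r=4.0, steps=50):
    xs = [x0]
    for _ in range(steps):
        xs.append(r * xs[-1] * (1 - xs[-1]))
    return xs

a = logistic_orbit(0.400000)
b = logistic_orbit(0.400001)  # differs by one part in a million

# The tiny initial gap grows exponentially; after a few dozen
# iterations the two orbits are completely uncorrelated.
print(max(abs(x - y) for x, y in zip(a, b)))
```

The second property (regions of the system eventually overlapping) is exactly what the milk-in-tea picture describes: the orbit wanders over the whole interval rather than settling into one corner of it.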

Anyway, enough of the theory. How does this relate to computing? The basic idea is that although chaotic circuits are apparently random, there is an underlying set of voltages that is cycling in a predictable manner. Think about the piercing feedback whine you get at a rock gig; this is a chaotic system, and it’s generally not a nice one (unless it’s well-controlled and you’re a Hendrix fan). However, it may start as a buzz in the amplifier caused by a nearby electrical source, such as a guitar, which is running on a particular voltage; the amp is also running on a particular voltage. Cycling between the two causes the feedback loop and the loud whine.

These predictable voltages can be used as the basis of a logic gate, as is the case with current technology. The difference, however, is in the use of control lines. Feed a pair of voltages into a logic gate’s inputs, and you will always get the result according to the gate’s function (there’s that determinism thing again). However, feeding the same inputs into a “chaogate” returns different results according to the value of the control lines: you might get an AND, a NOR or a NOT, for example.
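To make that concrete, here is a toy software model of a chaogate, loosely following the chaotic-logic scheme published by Sinha and Ditto (the parameter values are from that scheme, not from the article): the inputs perturb the initial state of a chaotic map, the control value x* shifts where those inputs land, and the same circuit produces different truth tables.

```python
# Toy "chaogate": the logistic map f(x) = 4x(1-x) is the chaotic
# element, the control line sets a base offset x_star, and the output
# is 1 if one iteration of the map lands above a threshold.
def chaogate(i1, i2, x_star, delta=0.25, threshold=0.75):
    x0 = x_star + (i1 + i2) * delta   # inputs perturb the initial state
    x1 = 4 * x0 * (1 - x0)            # one iteration of the chaotic map
    return 1 if x1 > threshold else 0

# The same circuit computes different gates as the control value changes:
CONTROL = {"AND": 0.0, "OR": 0.125, "XOR": 0.25}
for name, x_star in CONTROL.items():
    print(name, [chaogate(a, b, x_star) for a, b in [(0, 0), (0, 1), (1, 0), (1, 1)]])
# AND [0, 0, 0, 1]
# OR [0, 1, 1, 1]
# XOR [0, 1, 1, 0]
```

Nothing about the gate’s wiring changes between rows; only the control offset does, which is the trick the article describes.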

That’s the first sucker-punch for determinism. Who needs separate logic gates for AND, OR, NOT, etc., when I can build one chaogate and change the output from the gate on the fly? Build a CPU out of these things, and it could easily also function as a GPU, just by flicking a hardware or software switch.

The second comes as a result of the first: a self-repairing circuit. When one gate fails (maybe the chaos circuit broke and threw itself into disarray), another can be re-tasked to do its job. Maybe the first gate only stopped working as an AND, but works fine as a NOT or an OR; it can carry on doing those jobs, and the AND function hasn’t been lost from the system.

More interesting still is the third outcome of this technique: Content-Addressable Memory (CAM). Whilst normal memory works on the basis of addressed locations, CAM circuits allow you to locate an item of data in memory simply by knowing its value. For example, if I load data into CAM, I can find all instances of a datum by simply feeding it into the retrieval circuit. As you can probably imagine, this has huge ramifications for searching and indexing, so it’s no surprise to learn that “a commercial customer” (anyone else thinking Google here?) has commissioned such a system for internal testing.
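The semantics of CAM are easy to sketch in software, even though the whole point of the hardware is that the comparison happens in parallel across every cell at once. The class below is purely illustrative (names and structure are my own): it behaves like ordinary addressed memory, but also answers “which addresses hold this value?” in a single lookup.

```python
# Sketch of content-addressable memory semantics: retrieval is keyed
# by the stored value rather than by the address. A real CAM performs
# the value comparison in hardware, across all cells simultaneously.
class ToyCAM:
    def __init__(self):
        self.cells = {}   # address -> value, like ordinary RAM
        self.index = {}   # value -> set of addresses holding it

    def store(self, address, value):
        self.cells[address] = value
        self.index.setdefault(value, set()).add(address)

    def find(self, value):
        """Return every address holding `value`, in one lookup."""
        return sorted(self.index.get(value, set()))

cam = ToyCAM()
for addr, word in enumerate("four score and seven years ago".split()):
    cam.store(addr, word)
print(cam.find("seven"))  # → [3]
```

Conventional software indexes (like the dictionary above) pay for this in extra memory and bookkeeping, which mirrors the extra-circuitry cost of traditional hardware CAM discussed below.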

The problem with traditional CAM is the extra circuitry required to implement it, and the associated costs in power and heat (as well as cash). However, because chaogates have a large number of possible stable states, it is much easier to store large amounts of information. For example, the absolute minimum number of bits required to represent every letter in the standard English alphabet is 5 (2⁵ = 32, so even then there are a few codes to spare). Using chaogates, this would consume one Non-linear Bit (or “nit”), with plenty left over. One of the researchers has stored the entirety of Lincoln’s Gettysburg Address (1,452 characters, or 11,616 bits/1.4KB) in just 264 nits, 1/44th of the storage requirement. Part of this saving is achieved through the system’s ability to use one nit to represent an entire word, so low-level memory searches can be conducted on actual words, using a simple AND on the voltage representing that word.
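The arithmetic behind that comparison is straightforward; this snippet just reproduces the figures quoted above (the 264-nit count is the researchers’ reported result, not something derivable from first principles).

```python
# The storage comparison from the paragraph above, worked through.
text_chars = 1452                 # length of the Gettysburg Address
bits_per_char = 8                 # conventional byte-per-character storage
bits_total = text_chars * bits_per_char
print(bits_total)                 # 11616 bits, about 1.4 KB

nits_used = 264                   # figure reported for the chaogate store
print(bits_total // nits_used)    # 44, i.e. 1/44th of the storage
```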

There are issues with the technology, however. For example, chaotic circuits are vulnerable to interference by electronic noise (think back to the feedback at the rock concert), and so it may be that they require shielding. The switching circuits to alter the chaogates’ behaviour will be tricky and therefore expensive to develop, and it’s apparently proving difficult to sell the concept to existing chip manufacturers.

There’s no doubt, however, that this technology has the potential to provide some impressive improvements in searching, indexing, storage efficiency and processor sizes, not to mention the idea of self-repairing circuits. It may also prove to be an enabling technology for more exotic approaches like quantum computing. I for one am looking forward to the first chaotic search engine, and to the day my computer has a chaos chip inside it. You can do your own jokes around that sentence.


For more information, check out the New Scientist article linked above, or William Ditto’s older article at http://www.fortunecity.com/emachines/e11/86/mastring.html (contains more technical detail).