Virtual Chemistry

If you’re interested, Virtual Chemistry can now be played with or downloaded here.

A little while ago, I was once again idly messing around in NetLogo, having fun with the layout-spring method, which treats agents like the nodes of a network of springs and the links between them as the springs themselves, then computes the dynamics and moves the nodes accordingly. Eventually, inspiration struck: I wondered whether it would be possible to build a simulation of chemical bonding. I spent about an hour writing and revising the code, and thus Virtual Chemistry was born.

It works like this: at the beginning of each simulation run, a random number is generated. This is the number of unique elements that will appear. Arrays are created that store the properties of these elements, namely: complements (which tells the element which other elements it can form bonds with), and maxbonds (which, obviously, tells the element the maximum number of bonds that it can form). Every step, an atom checks for other atoms within the radius “interact-radius” (which the user can set using a slider). If any of the nearby atoms are in the atom’s complement list, there is a ten percent chance that it will form a bond, as long as it hasn’t reached the maximum number of bonds. Atoms also wiggle around randomly to simulate temperature effects.
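The bonding rule above can be sketched in ordinary Python (this is a paraphrase, not the NetLogo original; INTERACT_RADIUS, the ten percent bond chance, and the wiggle size stand in for the model's sliders and constants):

```python
import math
import random

INTERACT_RADIUS = 2.0  # stand-in for the "interact-radius" slider
BOND_CHANCE = 0.10     # the ten percent chance per step

def setup_elements(n_elements, rng=random):
    """Give each element a complement list and a maximum bond count."""
    complements = [rng.sample(range(n_elements), rng.randint(1, n_elements))
                   for _ in range(n_elements)]
    maxbonds = [rng.randint(1, 4) for _ in range(n_elements)]
    return complements, maxbonds

def step(atoms, complements, maxbonds, rng=random):
    """One update: each atom may bond with a complementary neighbor, then wiggles."""
    for a in atoms:
        for b in atoms:
            if a is b or b in a["bonds"]:
                continue
            dist = math.hypot(a["x"] - b["x"], a["y"] - b["y"])
            if (dist <= INTERACT_RADIUS
                    and b["element"] in complements[a["element"]]
                    and len(a["bonds"]) < maxbonds[a["element"]]
                    and len(b["bonds"]) < maxbonds[b["element"]]
                    and rng.random() < BOND_CHANCE):
                a["bonds"].append(b)
                b["bonds"].append(a)
        # random wiggle to simulate temperature
        a["x"] += rng.uniform(-0.1, 0.1)
        a["y"] += rng.uniform(-0.1, 0.1)
```

Run repeatedly, two nearby complementary atoms will almost always bond within a few dozen steps, while the maxbonds check keeps any atom from exceeding its valence.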

It took me a while to get this working and to squash a few irritating bugs, but eventually I did, and, not to be immodest, I was impressed with the kinds of behavior it could produce.

As I watched the simulations unfold, I observed behavior that I hadn’t even considered when I was building the simulation. For example, when the simulated space was packed to a sufficient density with atoms, it began to experience real pressure effects. The way the physics works, bonds hold atoms together, but atoms repel each other. At the start, with the atoms at a low density (and a low enough temperature), certain “molecules” formed. As I added atoms (effectively increasing the pressure), some structures that had been unstable became stable, and some of the structures that had been stable destabilized. I was fairly happy, because I realized that what I was seeing were actual phase changes in my virtual material. Similar phase changes occurred when I changed the temperature, too.

One experimental run in particular is very illustrative. I call it Triad World. In this world, there is only one element (“Element 0”), and each atom of Element 0 can have at most three bonds. Even in a simple world like this, I observed some interesting things.

Here, you can see the atoms, and the three-atom triangles that give Triad World its name. As you can see, these dominate, but there are also other “molecules” which occur quite frequently. The scene above arose at a low temperature and a fairly low pressure.

After adding a few atoms (and thus increasing the pressure), new stable structures emerged. Note the large chain near the center of the image. Only the restraining force exerted by the surrounding atoms prevents the heat from tearing this chain apart. The same force, though, also prevents the two-atom dimers from forming triads, which they normally would have done.

There are a lot of potential applications for a program like this (more so, perhaps, than my zombie infection simulator), and it’s certainly a lot of fun to play with. The problem is, running the simulation with more than about two hundred atoms slows it down pretty badly. But in order to simulate anything as complicated as, say, a rudimentary biomolecule, thousands of atoms or more would be required. With that in mind, here’s my to-do list of improvements:

  • Optimize the code, if possible, to make it run faster.
  • Write a better bond-forming routine. I’m not happy with the one that’s currently implemented.
  • Tune the attraction and repulsion strengths. Right now, I get the vague feeling that the weird interatomic forces that exist in the simulation are preventing some interesting structures from forming. I know for a fact that they’re preventing any but the most rudimentary chemical reactions.
  • Implement some kind of energy system. Right now, the only energy comes in the form of the random motions induced by the heat. I’d like to make it so that forming bonds consumes energy and breaking bonds releases it.

That’s all for now. I’m hoping to have the complete program up on the NetLogo website soon, and when it’s up, I’ll publish the link.

SimHeart 3.0

Some months ago, I wrote a series of posts about the behavior of my NetLogo heart simulation (namely, this one and this other one). SimHeart has been on the backburner since then, partly because I was making some effort (some) to pay attention to my classes, but mostly because I didn’t have any new ideas to implement. Then, a couple of weeks ago, I was messing around in NetLogo, trying to figure out how to get the cells of the grid to obey a switching-on rule similar to the kind of rules you find in neural networks. That, as it turned out, was a huge bust, but I re-used some of the code to build SimHeart 3.0.

This is a pretty radical revision, but, behaviorally, it’s pretty much the same as the old simulator. Still, there are some changes:

  • The cells, not the agents, do most of the work. Whether they fire or not is determined by a stochastic (random) sigmoid function. If a lot of a particular cell’s neighbors have high potential (the primary variable; actually, three variables, one for each system, but I’ll get to that later) and the cell’s potential is low enough, then the cell has a very high probability of switching its potential to 1. If not, the potential steadily drops.
  • The AV node is now fully simulated. I mentioned in my previous posts that I wanted to do this, and since the atria and the ventricles are simulated as two different kinds of potential, all I had to do to simulate the AV node was to introduce a third variant of the potential variable. Now, a heartbeat starts at the SA node (meaning a particular cell’s potential is set to 1), the wave travels until it reaches the entry cell for the AV node, triggers another wave (one that’s morphologically different, which I’ll talk about in the next bullet point) in the AV node, which travels to the exit cell, triggering the ventricular beat. This solution has the nice feature of taking care of the AV node’s natural delay for me, as well as realistically limiting the heart rate that the AV node can transmit from the atria to the ventricles. This is, by far, the most important addition.
  • The model runs faster. Not much faster, but every bit counts, and the simpler code also makes the model much more compact and elegant.
  • The model is more realistic. The stochastic potential-based cells behave a lot more like a real heart than the old model did.
  • The model now takes into account the different sizes of the heart’s components. Because of the way the model works, the longer a cell’s potential takes to decay, the smaller the component being simulated effectively is. Thus, the potential in the AV node takes a very long time to decay, making nodal arrhythmias essentially impossible (and making the AV node behave as though it’s very small); the potential in the atria takes a moderate amount of time to decay, making atrial arrhythmias much less common (and making the atria behave as though they’re larger than the AV node but smaller than the ventricles); and the potential in the ventricles decays quickly, making them fairly arrhythmia-prone. I think you see the pattern.
  • The arrhythmias have changed. This isn’t necessarily a good thing, but it’s not crippling, either. The model now produces ventricular tachycardia much more readily than it did before, but the incidence of ventricular fibrillation has been reduced proportionately, and it can be hard to differentiate from tachycardia. Unless the parameters are adjusted, atrial arrhythmias don’t occur at all. Still, the model can now easily simulate junctional tachycardia (where a spiralling, self-sustaining wave in the AV node stimulates the ventricles to beat too quickly).
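The stochastic sigmoid firing rule from the first bullet can be sketched roughly like this (the threshold, steepness, and decay values are illustrative guesses, not the model's actual settings):

```python
import math
import random

def fire_probability(neighbor_potential_sum, steepness=4.0, threshold=3.0):
    """Sigmoid: the more high-potential neighbors, the likelier a firing."""
    return 1.0 / (1.0 + math.exp(-steepness * (neighbor_potential_sum - threshold)))

def update_cell(potential, neighbor_potentials, decay=0.1, rng=random):
    """Fire (jump to 1) if the cell is excitable and enough neighbors are
    active; otherwise let the potential steadily drop."""
    if potential < 0.2 and rng.random() < fire_probability(sum(neighbor_potentials)):
        return 1.0
    return max(0.0, potential - decay)
```

A cell surrounded by active neighbors almost certainly fires, one with quiet neighbors almost certainly doesn't, and the sigmoid smooths the transition between the two regimes.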

And, not mentioned on the list above, the model is now prettier, too, since I was apparently experiencing some aesthetic inspiration when I was writing it. So, in SimHeart tradition (can I call it a tradition after three posts?), I present: the screenshots (note: ignore the weird screwed-up areas. They’re just explanatory text that apparently didn’t transfer into the screenshot. Don’t worry, you’re not missing anything, it’s all explained in the text below):

Periodically (with the period determined by the “s-rate” slider next to the view), the cell labeled “SA” sets its potential to 1, generating the atrial pulse (the red wave). If, for some reason, the SA cell can’t fire (for example, if the atria are in fibrillation and there’s already a wave passing through the SA node), no wave will be generated.
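That periodic trigger amounts to something like this (a Python sketch; the tick counter stands in for the “s-rate” slider, and “busy” for a wave already covering the SA cell):

```python
# Sketch of the SA node's periodic trigger; names and tick counts are illustrative.
class SANode:
    def __init__(self, s_rate):
        self.s_rate = s_rate  # ticks between attempted beats
        self.clock = 0

    def tick(self, busy):
        """Every s_rate ticks, try to start an atrial pulse."""
        self.clock += 1
        if self.clock >= self.s_rate:
            self.clock = 0
            if not busy:
                return "atrial_pulse"  # the red wave starts here
        return None  # either not time yet, or the cell was already occupied
```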

When the pulse passing through the atria hits the cell labeled “AV1,” if there’s not a wave passing through it at the moment, it triggers a pulse in the AV node (the green wave).

If all goes well, the AV pulse triggers the cell labeled “AV2” which activates the ventricular pulse (the blue wave).

Of course, all does not always go well:

In this case, something has gone wrong with the electrical wave as it moves through the ventricles, causing a deadly spiral wave to form. This is simulated ventricular tachycardia, which will (probably) eventually decay into ventricular fibrillation. As you can see on the ECG, the atrial pulses are still trying to get through, but the AV node can’t activate the ventricles. Although it’s not pictured here, a quick press of the “Defibrillate” button took care of the arrhythmia.

Since the model’s now so much simpler, it’s easier to simulate diseases and treatments. Note the row of blue buttons on the right side of the images. There is, of course, the defibrillate button, which is remarkably effective at terminating arrhythmias. Below that are the “Increase Adrenalin” and “Increase Antiarrhythmics” buttons, the first of which makes the heart depolarize faster, and the second of which makes it depolarize more slowly. The first, as you might expect, makes the heart more arrhythmia-prone, and the second, obviously, makes it less arrhythmia-prone.

The set of buttons below that simulates various cardiac illnesses. “AV Block” makes the AV node’s cells depolarize very slowly, meaning that, when the sinus rate is set to a very high value, not all of the beats are able to get through. This, if I’m not mistaken, represents 2nd-degree AV block, type 1. The button below that, “Sinus Tachycardia”, sets the sinus rate to a very high value, simulating either a disease process or the effect of very strenuous exercise or other stress on the heart. “Long QT Syndrome” is a sort of bastardized version of the real disease, but has the same effect, making the ventricles dangerously more prone to arrhythmia.

All in all, I’m far happier with the new version than I was with either of the old versions. There are still some improvements to be made, however, and I’ll post updates as needed. Soon, I hope to have the model up on the NetLogo website, so that everybody can fiddle around with it.

A Labor of Love

A few days ago, I was mucking about in the vast swampland that the Internet has become, and I stumbled upon yet another reference to a programming language I’d heard about a few times before: “Processing.” My interest piqued, I went in search of a compiler, found one at Processing’s website, and immediately started learning the language.

Given my many previous failures trying to learn Java, upon which Processing is based, I didn’t think I’d have much chance of learning the language, but I tried anyway, and actually found it just about as intuitive and elegant as Python, which remains my favorite programming language of all time. For a while, I cobbled together various tiny programs to do things like graph functions in one and two variables, graph parametric equations by replacing x, y, and z with RGB color values (producing a rather strange-looking wave of color that wasn’t nearly as interesting as I’d hoped), and visualize one-dimensional cellular automata (which, by the way, was a complete failure, because Python is the only language I’ve found whose array handling I can both tolerate and understand). Then, since I’m a mathematician at heart, I thought I’d do something that mathematicians love to do: visualize the primes.

Before I go on, I should re-iterate just how much of a godsend Processing is. I’ve been trying to write methods in Python to visualize various kinds of functions and data for many moons, and my results have never been much more than mediocre. The only graphics module I’ve learned in Python (Tkinter, in case you were interested) is clumsy and runs slowly, and really isn’t meant to handle the kind of pixel-by-pixel manipulations I’d had in mind. Processing, though, exists solely for this purpose, which is the reason for my gushing for the last three paragraphs.

Anyway, the primes. I put together a simple program that computes the primes with the standard Sieve of Eratosthenes, finds the gap between each prime and the one before it, and draws a circle at the location of the current prime whose size is based on that gap.
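In outline, the arithmetic looks like this (a Python sketch of the computation only; the drawing itself is done in Processing and isn't reproduced here):

```python
def sieve(limit):
    """Standard Sieve of Eratosthenes: return all primes up to limit."""
    is_prime = [True] * (limit + 1)
    is_prime[0] = is_prime[1] = False
    for n in range(2, int(limit ** 0.5) + 1):
        if is_prime[n]:
            for m in range(n * n, limit + 1, n):
                is_prime[m] = False
    return [n for n, p in enumerate(is_prime) if p]

def prime_gaps(limit):
    """Pairs of (prime, gap to the previous prime); the gap sets circle size."""
    primes = sieve(limit)
    return [(p, p - q) for q, p in zip(primes, primes[1:])]
```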

I suspect I could have saved these thousand words by doing as the cliché says, and just giving you the picture:

Prime Gaps

(You can see a higher-resolution version here).

There’s a great deal of hidden beauty in this picture, most of which I can’t claim responsibility for. There’s a certain order to it, even though the primes seem to be quite random. Really, the beauty comes from the delicate, elegant structure of mathematics. The structure of the primes, like the structure of pi, is an expression of the deep structure of numbers, and thus, of the deep structure of the universe itself. It can be an almost religious experience, a sort of holy communion with the Numbers, to be given a glimpse of that structure.

I don’t know why I’ve been so sentimental lately…either way, the point I’m driving at is this: visualization is a really powerful tool for understanding mathematics. And Processing is a great programming language for visualization. (And, once again, I sound like I’m on somebody’s payroll, but I’m not. As far as you know.).

SimHeart Update

The folks at the NetLogo website have been gracious enough to include SimHeart in their “community models” page, and the result is that there is now a place where you can run the program in your web browser (assuming you have a recent enough version of Java installed). Now, you don’t have to download or install anything in order to run it.

You can find the SimHeart applet here.

Once again, many, many thanks to the creators of NetLogo.

SimHeart — Now Available for Download

All right, as promised, I’ve finally figured out a way that people can download SimHeart to play with it themselves. Many thanks to the folks at NetLogo for automating so much of the process, and thanks to the file host for the free hosting.

The file is kind of large because, in order for it to work, I had to put a bunch of Java modules into the folder with it, but it shouldn’t take too long to download, even over a slow-ish Internet connection. When you’ve downloaded it, you’ll need to extract the file to your desktop. I recommend an unzipping program like WinZip or WinAce. The program should (major, major emphasis on should) work on Macs and PCs, but I make no guarantees.

To run the simulation, go into the folder into which you’ve extracted SimHeart, and double click on the HTML file there. It should open up in a new window, and you should see the simulation screen. If you don’t, either you don’t have an up-to-date version of Java, or something went wrong in the download process, or I made a mistake zipping the files. If you checked the previous two things, please leave a comment and describe the problem, and I’ll try to help, although I make no claims to be very good at this kind of thing.

Also, I must provide the obligatory legal disclaimer: I take no responsibility if this file somehow damages your system. To my knowledge, there is absolutely nothing in the file that should do so, but you never know, something might have gotten corrupted or damaged along the way. Also, this software is for entertainment purposes only, and should not be taken as any form of medical advice. I’m not sure why anybody would, but you never know.

Download SimHeart 2.0 here.

If you already have the latest version of NetLogo installed on your computer, you can download the much smaller .nlogo file here. If you’re interested in this kind of thing, you should go ahead and download NetLogo (you can do that here). Not only will it allow you to download a much smaller file, but NetLogo comes with a whole cornucopia of fascinating little simulations, and there are more you can download from the Internet.

Okay, apparently, that site decided to get rid of the file, so if you want to have a look at SimHeart, you can find it here, on the NetLogo community models page.

If you have trouble with either of these files, please let me know by commenting on this post. If you don’t want to do that for some reason, send an e-mail to asymptote [døt] inverse [át] gmail [døt] com (Sorry about all the weird characters in there, but that account gets enough spam as it is, without ever having broadcast the address on the Internet, so I figured I’d better obfuscate as much as possible).

I’ll try to update the files as I revise SimHeart, but I seem to be at a point where there’s not much more I can do with it, at least not without rewriting most of the code. I’ll be sure to post updates as they come.

SimHeart 2.0

It seems that every time I sit down to work on my heart-simulation project, I get a lot more done than I was expecting. In my last post on the subject, I talked about how I wanted to integrate a more realistic model of the atrioventricular (AV) node, the little bundle of nerve fibers that carries the contraction impulse from the atria at the top of the heart to the ventricles on the bottom. Apparently, I’d entirely misjudged the difficulty of this effort, since, once the solution occurred to me, I was able to implement it in about five minutes.

Here’s what I did. As I said before, each cell in the simulation has two variables assigned to it: ARefrac, which determines whether or not an atrial impulse can pass through the cell; and VRefrac, which determines whether a ventricular impulse can pass through. I solved the AV-realism problem by simply introducing a global variable called AVRefrac that determines whether or not the AV node can accept an impulse. Every time a simulated electrical “spark” strikes the simulated node, as long as AVRefrac is equal to or less than zero, it sets AVRefrac’s value to a user-specified constant I call AV-delay. So now the ventricles can only respond as fast as the AV node will allow, just like in a real heart! When I saw how beautifully my little fix had worked, I was thrilled!
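The fix amounts to something like this (a Python paraphrase of the NetLogo logic; AV_DELAY stands in for the user-set AV-delay constant):

```python
AV_DELAY = 20  # illustrative value for the user-specified AV-delay constant

class AVNode:
    def __init__(self):
        self.av_refrac = 0  # the global AVRefrac counter

    def strike(self):
        """An atrial spark hits the node; return True if it's conducted."""
        if self.av_refrac <= 0:
            self.av_refrac = AV_DELAY  # the node goes refractory
            return True                # impulse passes on to the ventricles
        return False                   # still refractory: impulse blocked

    def tick(self):
        """Called once per simulation step to count the delay down."""
        self.av_refrac -= 1
```

Because every conducted impulse re-arms the counter, the ventricles can never be driven faster than one beat per AV_DELAY ticks, which is exactly the rate-limiting behavior described above.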

So, my simulated heart is now more realistic than ever. For example, I did a few runs with the refract-length value (the value that determines how quickly cells recover their ability to fire after each firing) set very short so that arrhythmias would occur frequently, so that I could study their effects. Before long, my simulated heart went into atrial flutter/fibrillation (a condition where the small pumping chambers at the top of the heart expand and contract quickly and chaotically, often leading to a dangerously fast ventricular rate). I was amazed to see something very similar to the many atrial-fibrillation EKG’s I’ve looked at:

(Note: in the simulated EKG, I’ve separated the atrial and ventricular signals, since whenever the ventricular rate got very fast, it obscured all the atrial activity, and I wanted to be able to study the atrial activity as well)

Given my tendency towards oversimplified simulations that produce peculiar behavior, the resemblance this bears to real supraventricular tachycardia (a fast heart rate caused by the atria, often seen in atrial flutter or fibrillation) was, frankly, surprising. After about half a second of atrial flutter, the atria begin to fibrillate, producing that classic irregular ventricular response.

Note the extremely high ventricular rate that shows up towards the end of the ECG. That’s a rather unrealistic product of my simulation, since whenever one of the waves of excitation collided with the back of a previous wave, it had a tendency to collapse into a tachycardic or fibrillatory spiral.

There are some forms of supraventricular tachycardia that terminate on their own. They’re called “paroxysmal” supraventricular tachycardia, and my simple little simulation actually managed to produce a run of it!

Some forms of atrial fibrillation occur in the presence of heart block (which, in its most common form, is basically a very slow AV node that doesn’t conduct every impulse that passes to it). In those cases, the fibrillation is frequently asymptomatic or minimally symptomatic, since the heart doesn’t end up racing. When I set the AV-delay parameter higher than usual, I observed this very same phenomenon.

Eventually, the aforementioned wave-collision problem had become annoying enough that I decided to re-write part of the simulation so that there was a small probability that an electrical spark could actually cross a cell that had not entirely recovered. That solved a lot of my problems.
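The collision-tolerance fix can be sketched as follows (LEAK_CHANCE is an illustrative value, not the probability actually used in the model):

```python
import random

LEAK_CHANCE = 0.05  # small chance of crossing an unrecovered cell

def can_conduct(cell_refrac, rng=random):
    """A recovered cell always conducts; an unrecovered one occasionally does."""
    return cell_refrac <= 0 or rng.random() < LEAK_CHANCE
```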

In the re-written simulation, atrial fibrillation still produces that classic irregular ventricular heartbeat, and this time, since the waves are more collision-tolerant, the behavior doesn’t immediately degenerate into ventricular fibrillation, which gives me a chance to actually study it properly.

More updates as they’re warranted. And for those reader(s?) who are wondering what the hell has been wrong with me lately, don’t worry, I’ll be turning the blog over to my old cynical, sarcastic self very shortly.


I was sitting around without much to do, so I opened up SimHeart and let it run in the background. When I checked in on it again a few minutes later, I discovered some very interesting behavior:

Apparently, some of the standard sort of atrial fibrillation had started, then, spontaneously self-organized into a coordinated wave spiraling cyclically through the atria. You can see the wave in the screenshot.

This really grabbed my attention, so I watched it for a while, and discovered that, strangely enough, the wave was quite stable.

Not even the normal sinus beats, which occasionally inserted themselves in the path of the wave, were very good at disrupting it. Not long after this screenshot, it degenerated rather suddenly into normal atrial fibrillation.

Then, while having a look at the pictures a few minutes later, I realized something: my simulation had produced true atrial flutter. What I saw before and called atrial flutter was really just organized fibrillation. This, though, exhibits all the classic features of atrial flutter: rapid atrial waves with a sawtooth shape. In this case, since I had the ventricular response set to be fairly quick, it turned into quite realistic atrial tachycardia.

I tried to save the state of the simulation so that I could study it later, but as there are some features of NetLogo with which I’m not entirely familiar, I wasn’t able to do it. So, for now, I guess I’ll just keep running SimHeart in the background until I see that rhythm again.

The Simulated Heart

For the past few months, I’ve been playing around in a program called NetLogo that allows you to simulate agent-based emergent systems with pretty much no effort whatsoever. Being something of a cardiology nerd, I had the idea a while ago to build a vastly simplified model of the electrical conduction in the heart. With the Belousov-Zhabotinsky reaction in mind — it appears in James Gleick’s excellent book Chaos, which also has a chapter on the electrical waves of the heart, and which is probably what sparked my inspiration in the first place — I set out to build a simple electrical-wave model. What I got was competent enough. At the beginning of the simulation, a single “spark” in the center of the simulation grid would produce a radiating wave. Sometimes, a defect on the front of that wave would cause the wave to dimple, and then to curl in on itself, producing a self-sustaining oscillation.

Some time later, when I began to get interested in cardiology — especially cardiac arrhythmias — I began to realize that my simple little model produced some behavior that was actually strangely comparable to that of a real heart. So, I modified it to be even more heart-like. I programmed the simulator to generate an initial spark at the center of the grid every fifty time steps or so. As I watched the waves propagate across the screen, I was, frankly, mesmerized. For a while, the waves would march along. Then, one of them would go wobbly, curl in on itself, and start oscillating rapidly. It bore a great deal of similarity to some of the real computer simulations of electrical activity in the heart that I’d seen, specifically to those that generated ventricular tachycardia.

With this as an impetus, I spent many hours revising and playing with my system. I recently downloaded the newest version of NetLogo, and decided that it was time to re-write the heart simulator, which had been mangled and cluttered beyond recognition by the process of incremental revision — something that happens to most of my programs.

This newest version — Version 3, by my count — is my most complete yet. A simulated beat travels through the simulated atrium (in the simulation, the atrial activity is represented by yellow waves), then hits the simulated AV node — the part of the heart’s conduction system that connects the atria (the upper chambers) and the ventricles (the lower chambers) — hangs around for a moment, then starts to propagate as a new wave (this one red) through the simulated ventricles. I’ve observed quite a lot of fascinating and remarkably heart-like behavior in my simple model. I’ll run through some of it here.

This is the main screen. All those buttons and sliders set up the simulated heart’s various parameters. The “refract-length” slider, if you can spot it in this image, controls how quickly the cells become able to fire again after each firing. The shorter that interval, the more easily the heart will go into fibrillation. That’s why I built in the handy little “defibrillate” button you can see to the right of the display.

As I did a run of the simulation to produce an image for this post, I was lucky enough for the simulation to do something interesting almost immediately. Note the oddly distorted third beat. That’s actually the result of an extra breakaway wave in the simulated ventricles. In real life, we call things like that “palpitations” or “premature ventricular contractions.” When the model’s heart rate is faster, you can actually observe the compensatory pause that comes after most premature ventricular contractions.

A final note on this image: in order to make the pretty EKG-like display, I had to cheat a little. In reality, the small waves represent just as much activity as the large ones (since the two grids are exactly the same size) but since the atria are a lot smaller than the ventricles in a real heart, I thought it would be a good idea to de-emphasize their activity a bit. This also has the benefit of making my fake EKG look a lot more like a real one. I’m currently working on a way to make the conduction in the simulated atria more realistic.

Here, the simulated heart degenerates into the deadly arrhythmia known as ventricular fibrillation. In this often-fatal arrhythmia — which is the primary cause of sudden cardiac arrest syndromes — the ventricles, which represent the majority of the heart’s mechanical pumping power, simply begin to wiggle and wobble randomly, rather than beating in an organized fashion. The result is that no blood gets to the body and the brain, and death results in about ten minutes.

I observed quite a lot of fibrillation of one kind or another in my simulated heart. Since I intentionally set the refractory time short — that is, the cells recovered their firing ability quickly — the waves had a strong tendency to curl in on themselves and break up into spirals. These spiral waves quickly degenerated into clusters of randomly-oscillating cells. About two-thirds of the way through the run, you can see that the fibrillation suddenly stops. That was the result of me pressing the “defibrillate” button, which sends ninety percent of the cells into the refractory phase, unable to fire until they recover. Towards the end, you can see that the “normal sinus rhythm” returns.
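The defibrillate button's behavior, roughly (a Python sketch; REFRAC_TICKS is an illustrative refractory period, not the model's value):

```python
import random

REFRAC_TICKS = 50  # illustrative refractory period

def defibrillate(cells, fraction=0.9, rng=random):
    """Knock ~90% of cells into the refractory state, wiping out any waves."""
    for cell in cells:
        if rng.random() < fraction:
            cell["refrac"] = REFRAC_TICKS  # can't fire until this counts down
```

With most of the grid unable to fire, no circulating wave can find excitable tissue ahead of it, so the fibrillation dies out and the sinus rhythm can take over again.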

I’m actually quite pleased with this little simulation. It wasn’t terribly hard to build — then again, nothing in NetLogo is — and it produces interesting results. Here are my current goals for it:

  • Change some of the parameters so that they better reflect the physical disparities between the atria and the ventricles.
  • Improve my model of the AV node so that it discharges more realistically. Currently, it simply causes the electrical particles to pause for a moment, after which they are released. This means that, unlike the real AV node, my simulated one has a “memory,” and rather than discharging all at once like the real version, it simply discharges in the same order as the pulses that strike it.
  • Incorporate some kind of system to simulate damage to the heart.
  • Refine the electrical model so that the simulation is capable of producing ventricular tachycardia — a dangerous but more organized cousin of ventricular fibrillation in which a single self-sustaining oscillating spiral causes the ventricles to contract too fast to pump effectively. At the moment, the simulated ventricular tachycardia tends to degenerate into ventricular fibrillation almost immediately, making it difficult to study.
  • Make the translation between the simulated heart and the simulated EKG more realistic, so that it produces something more like a real EKG.
  • Make the model more analog. I’m hoping that this will solve a lot of my problems, but it’s probably going to be one of the hardest features to implement, with the possible exception of the better AV node simulation.

I’ll post updates as they come, and I soon hope to have a Java version of the simulator uploaded so that other people can play with it.

Random Syntax

Well, since I seem to be on something of a roll with this whole random-word-generator thing, I thought I’d write another post about it, detailing my current progress.

I took the algorithm that I used in my previous post and tricked it out with a random-syntax generator. In a nutshell, here’s how the new algorithm works:

  1. Creates a list of phonemes using both consonant-vowel and vowel-vowel pairs.
  2. Adds a very few three-letter phonemes, isolated consonants, and random accented characters for flavor.
  3. Creates a “syntax matrix” with dimensions equal to the size of the phoneme “dictionary,” and populates this matrix with random zeros and ones. This forms the syntax that determines how words may be constructed.
  4. Builds words by adding phonemes to the current word, but only if the new phoneme is allowed (by the syntax) to come after the previous one.
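Steps 3 and 4 can be sketched in Python (a simplified reconstruction, not the original code; the phoneme set and target word length are illustrative):

```python
import random

def make_syntax(n, rng=random):
    """Random 0/1 matrix: syntax[i][j] == 1 means phoneme j may follow i."""
    return [[rng.randint(0, 1) for _ in range(n)] for _ in range(n)]

def build_word(phonemes, syntax, length=4, rng=random):
    """Chain phonemes, but only when the syntax allows each adjacent pair."""
    word = [rng.randrange(len(phonemes))]
    while len(word) < length:
        nxt = rng.randrange(len(phonemes))
        if syntax[word[-1]][nxt]:
            word.append(nxt)
        else:
            break  # as in the original: give up rather than retry
    return "".join(phonemes[i] for i in word)
```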

The syntax matrix probably needs some more explanation, so, I’ll explain by example. Here’s an output sample from the algorithm:

[‘MU’, ‘Ô’, ‘TA’, ‘TO’, ‘CU’, ‘VE’, ‘GA’, ‘QO’, ‘PU’, ‘RU’, ‘FIP’, ‘XA’]
[0, 1, 0, 0, 1, 1, 1, 1, 0, 0, 1, 0]
[0, 0, 0, 0, 0, 0, 1, 0, 1, 1, 1, 1]
[0, 1, 0, 1, 0, 1, 1, 1, 0, 1, 1, 1]
[1, 1, 1, 0, 1, 1, 0, 1, 0, 1, 0, 0]
[0, 1, 1, 0, 0, 0, 0, 0, 0, 1, 0, 1]
[1, 1, 1, 1, 1, 0, 0, 1, 1, 0, 1, 1]
[0, 1, 0, 1, 1, 0, 0, 0, 0, 1, 0, 0]
[0, 1, 0, 1, 1, 0, 0, 0, 1, 0, 1, 1]
[0, 0, 1, 1, 0, 1, 1, 0, 0, 0, 0, 0]
[1, 1, 1, 0, 0, 1, 1, 1, 0, 0, 0, 1]
[0, 0, 0, 1, 0, 1, 1, 1, 0, 1, 1, 0]
[0, 1, 1, 1, 0, 0, 1, 0, 1, 0, 0, 0]

The list at the very top is the phoneme dictionary, and the two-dimensional array that follows is the syntax matrix. As I said before, the matrix determines whether the phoneme combination XY is allowed. Note that XY and YX are treated as completely different combinations here; one may be allowed while the other is not. For a more concrete example, the first word in the list is VEQOXA. Its first two phonemes are VE and QO. VE is the sixth entry in the dictionary (index five, because Python lists are indexed from zero), and QO is the eighth (index seven). To determine whether this combination is allowed, the algorithm goes to the sixth row, and then to the eighth entry in that row. Since that entry is a 1, the combination VE-QO is allowed. This is very much like the fact that, in English, we allow the letter combinations ABA and ABR, but not PQK. (Note: this word also demonstrates the asymmetry of the syntax matrix. The combination VE-QO is allowed, but if you take the time to look it up, you will find that QO-VE is not.)

I had a lot of trouble getting this fiddly little bastard of an algorithm to work, so there are some peculiarities. For example, since Python doesn’t have anything resembling “goto”, if the randomly-chosen phoneme was not syntactically allowed to be added to the new word, then instead of going to select a different one, the algorithm simply gives up, adds nothing, and starts the whole procedure again. The result is that some of the words are much shorter than they should be. Hopefully, my Python skills will someday improve to the point that I can solve this irritating problem. I’d also like to modify the procedure so that no duplicate phonemes are allowed, since the two phonemes would likely have different syntactical relationships to the other phonemes, and so would build words that appear to violate their own syntax. (Actually, duplicate phonemes might create some interesting little idiosyncrasies that would make the words even closer to real language. So I guess I’ll leave duplicate phonemes alone).
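For the curious, here’s a minimal sketch of the dictionary-plus-syntax-matrix scheme in Python. The letter pools and sizes are my own assumptions, and a bounded retry loop stands in for the missing “goto” (so words can still come up short when no legal phoneme turns up):

```python
import random

# Assumed letter pools (the original also mixes in vowel-vowel pairs,
# three-letter phonemes, and accented characters).
CONSONANTS = "BCDFGHJKLMNPQRSTVWXZ"
VOWELS = "AEIOU"

def make_dictionary(size=12):
    # Steps 1-2: a small dictionary of random consonant-vowel phonemes.
    return [random.choice(CONSONANTS) + random.choice(VOWELS)
            for _ in range(size)]

def make_syntax(size):
    # Step 3: a size x size matrix of random 0s and 1s; entry [x][y] == 1
    # means phoneme x may be followed by phoneme y.
    return [[random.randint(0, 1) for _ in range(size)] for _ in range(size)]

def make_word(phonemes, syntax, length=3):
    # Step 4: grow the word one phoneme at a time, accepting a candidate
    # only if the syntax matrix allows it to follow the previous phoneme.
    word = [random.randrange(len(phonemes))]
    while len(word) < length:
        for _ in range(100):  # bounded retries, in case a row is all zeros
            candidate = random.randrange(len(phonemes))
            if syntax[word[-1]][candidate] == 1:
                word.append(candidate)
                break
        else:
            break  # give up early, producing a shorter word
    return "".join(phonemes[i] for i in word)

d = make_dictionary()
m = make_syntax(len(d))
print([make_word(d, m) for _ in range(5)])
```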

Here’s a larger sample of what the current version of the random word generator is producing, conveniently formatted for your viewing pleasure:


P.S.: A thousand points to anybody who can pronounce all of these!

More Random Words

Yesterday, I wrote a post about generating completely random names/words from a character set. Well, out of boredom, I’ve refined my algorithm, and, somewhat to my surprise, it now produces names/words which are not only interestingly alien, but actually phonetically consistent:


What I did was to modify the original algorithm so that, rather than just sticking together random letters cushioned by a vowel every other letter, it creates a list of possible two-letter phonemes (in addition to an occasional sprinkling of extra characters), and builds the words from those phonemes. This is good in that it produces pseudo-consistency in the words I generate (it’s not real consistency like you find in language, since there are no rules (not yet, anyway) for how phonemes can and cannot fit together). The only downsides are that: 1) the generator now very rarely produces three-letter words, and 2) the generator never produces inanely amusing things like “FUKER” and “CUJO'” from the previous list.
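A minimal sketch of that phoneme-pair approach (with assumed letter pools, and leaving out the occasional sprinkling of extra characters):

```python
import random

# Assumed letter pools; the post doesn't give the exact sets.
CONSONANTS = "BCDFGHJKLMNPQRSTVWXZ"
VOWELS = "AEIOU"

# Every two-letter consonant-vowel phoneme. Drawing whole phonemes,
# rather than single letters, is what produces the pseudo-consistency.
PHONEMES = [c + v for c in CONSONANTS for v in VOWELS]

def random_word(num_phonemes):
    return "".join(random.choice(PHONEMES) for _ in range(num_phonemes))

print(random_word(3))
```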

If I ever optimize it to a degree with which I’m satisfied, I’ll be sure to post the Python source code.

P.S.: Sorry about the lack of real content lately. The holiday season’s made me sluggishly overfed and rather lazy.

Randomness is Fun!

I was fiddling around again in Python, and I decided to throw together a little random word generator. Trying to be as culturally diverse as possible, I was sure to include the apostrophe as a possible character, in addition to the ever-awesome alveolar click (written as “!”). I discovered that randomness can be rather amusing, as well as being useful for producing incredibly unfamiliar names. I conjecture that these would look odd to the speaker of pretty much any language on Earth, since the phonology is determined entirely at random.

Without further ado (or further clichés), I present you with one hundred completely random names. (Note: The algorithm I used ensures that there is a vowel at least every other letter, because I got tired of ending up with garbage like SPDQGXL)


I also found that throwing in all the accented characters that I could find made some amusing pseudo-European names:


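For what it’s worth, the vowel-every-other-letter trick can be sketched like this (the exact character pools are my assumptions):

```python
import random

VOWELS = "AEIOU"
# Consonant pool including the apostrophe and the alveolar click "!",
# as described above; the precise set is my guess.
CONSONANTS = "BCDFGHJKLMNPQRSTVWXZ'!"

def random_name(length):
    # Force a vowel after any non-vowel, so no two consonants are ever
    # adjacent: the "vowel at least every other letter" rule.
    name = ""
    for _ in range(length):
        if name and name[-1] not in VOWELS:
            name += random.choice(VOWELS)
        else:
            name += random.choice(VOWELS + CONSONANTS)
    return name

print(random_name(8))
```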

Hebbian Neural Networks

Against all odds, I’ve started an A.I. project, and have actually made some progress. I never thought I’d see the day. It’s not a whole lot of progress, but when you’ve been tinkering as long as I have, you learn to take what you can get.

What I’ve got is a fairly simplistic neural network model, utilizing Hebbian learning. That is to say, whenever two neurons in the network happen to be switched on at the same time, their connection gets stronger. For the last week or two, I’ve been tinkering with the parameters and different methods of inputting the data, and I finally have something that performs something roughly like learning.

I feel the need to repeat that last part: roughly like learning. I have no idea if it’s actually learned anything. Sometimes when I test it, it seems to be able to predict simple patterns, and learn how to tell a small prime from a small non-prime. Other times, it becomes so profoundly stupid that it actually anti-learns, refusing to respond to any stimulus even remotely like its training data. And at yet other times, it doesn’t do anything at all.

This last bit is worsened by my habit of taking a perfectly good program, tinkering with it until it becomes unusable, then accidentally saving over the original. The fact that I wrote the program in Python makes that all the worse, since with Python, you have to save the program every time you run it, and I’ve gotten into the bad habit of just pressing F5 without making sure I’ve saved a backup. The end result is that the current version is pretty much nonfunctional.

Still, the very fact that I was able to write and implement a neural network model makes me pretty happy. I’ve always had trouble handling networks, and now it seems that I’ve got something vaguely workable. So, without further ado (or further clichés), I present to you (okay, one more cliché) Hebbian v5.0 (be warned: there is quite a lot of garbage code and artifacts in there, and frankly, I’m too damn lazy to take it out. Hey, if the human genetic code can be full of junk DNA, then why can’t my code?):

(Written in Python 2.4.3 (I think))

#Hebbian, version 5.0 #
#Written by Asymptote. #
#Feel free to modify and distribute this code (I don’t know why you’d want to, #
#but hey, whatever makes you happy), as long as you keep this header intact. #

import random
import math

connectivity = []
activation = []
ns = 100

#Zero activations and small random weights (loop bodies reconstructed;
#the originals were lost).
for i in range(0,ns):
    activation.append(0)

for i in range(0,ns):
    temp = []
    for j in range(0,ns):
        temp.append(random.uniform(-1,1))
    connectivity.append(temp)

def sign(n):
    if n == 0:
        return 0
    sg = abs(n)/n
    return sg

def transmission(act,conn,nsize,thresh):
    #Each neuron fires if its average weighted input exceeds the threshold.
    for a in range(0,nsize-1):
        summ = 0
        for b in range(0,nsize-1):
            summ += act[b] * conn[a][b]
        if float(summ)/float(nsize) > thresh:
            act[a] = 1
        else:
            act[a] = 0

def hebbian(act,conn,nsize):
    #Strengthen connections between co-active neurons...
    for a in range(0,nsize-1):
        for b in range(0,nsize-1):
            if act[a] == act[b] == 1:
                conn[a][b] += sign(conn[a][b]) * 0.1
    #...apply a uniform decay...
    for a in range(0,nsize-1):
        for b in range(0,nsize-1):
            conn[a][b] -= sign(conn[a][b]) * 0.01
    #...and clamp the weights to [-1, 1].
    for a in range(0,nsize-1):
        for b in range(0,nsize-1):
            if conn[a][b] > 1:
                conn[a][b] = 1
            if conn[a][b] < -1:
                conn[a][b] = -1

def run(act,conn,nsize,thresh,runlength):
    #Step the network repeatedly (body partly reconstructed; the original
    #was truncated and presumably also called transmission and hebbian).
    for i in range(0,runlength-1):
        transmission(act,conn,nsize,thresh)
        hebbian(act,conn,nsize)
        print act

def actprint(act,nsize):
    strg = ""
    for a in range(0,nsize-1):
        if act[a] == 1:
            strg += "#"
        else:
            strg += "_"
    print strg

def connprint(conn,nsize):
    printarr = []
    for a in range(0,nsize-1):
        tempstr = ""
        for b in range(0,nsize-1):
            if abs(conn[b][a]) > 0.5:
                tempstr += "#"
            else:
                tempstr += "_"
        printarr.append(tempstr)
    for a in printarr:
        print a

def striphex(s):
    #Leftover stub; the original referenced undefined names. Assumption:
    #it was meant to strip the "0x" prefix from small hex values.
    if s < 16:
        return hex(s)[2:]

from Tkinter import *
root = Tk()
w = Canvas(root,width=1000,height=1000)

def Binary(n):
    out = ""
    x = n
    while x > 0:
        out = str(x % 2) + out
        x = int(x / 2)
    return out

def make_input(n,ml):
    #Convert n to a binary input vector of length ml, padded with
    #leading zeros (append line reconstructed).
    bin = Binary(n)
    inarr = []
    for i in range(0,len(bin)):
        inarr.append(int(bin[i]))
    while len(inarr) <= ml - 1:
        inarr = [0] + inarr
    return inarr

def drawnetwork(numnodes,connectivity):
    #Scatter the nodes on the canvas and draw the stronger connections
    #(drawing code reconstructed; the original was truncated).
    points = []
    for i in range(0,numnodes - 1):
        points.append((random.randint(0,1000),random.randint(0,1000)))
    for i in range(0,numnodes - 1):
        for j in range(0,numnodes - 1):
            if abs(connectivity[i][j]) > 0.1:
                if i == j:
                    continue
                w.create_line(points[i][0],points[i][1],points[j][0],points[j][1])

def drawconn(numnodes,connectivity):
    #Unfinished; the original body was lost.
    for a in range(0,numnodes-1):
        for b in range(0,numnodes-1):
            pass

xor = {"00":0,"01":1,"10":1,"11":0}

def isprime(n):
    for i in range(2,n-1):
        if n % i == 0:
            return False
    return True

primelist = []

for i in range(2,1000):
    if isprime(i) == True:
        primelist.append(i)

#Training loop (partially reconstructed; the original was truncated).
#Stimulate every tenth neuron, alternating the offset each step, then
#let the network respond and learn.
for i in range(1,1000):
    #activation[primelist[i]%ns] = 1
    #activation[(ns - primelist[i])% ns - 1] = 1
    for a in range(0,ns-1):
        if (a + i%2)%10 == 0:
            activation[a] = 1
    transmission(activation,connectivity,ns,0.25)
    hebbian(activation,connectivity,ns)
    #print "***"

print "*"*100

#Good threshold = 0.25



SymbolicAI Update

Well, the SymbolicA.I. project has completely stalled, since life trumps software (I’m pretty sure the International Brotherhood of Geeks is going to take away my pocket protector for saying that), and since I’ve decided that I don’t like C++ (the IBoG is probably going to take my calculator now, too). So, if the project survives at all, its next incarnation will probably be written in Python (with which I have a great deal more experience).

Just thought I’d keep everybody posted.

SymbolicAI — Update 1

I have now built a skeleton (well, perhaps the skeleton of a skeleton) of SymbolicAI’s symbol-manipulation code. Here is the program, Version 0.1:

/*
SymbolicAI V0.1 by Asymptote
August 2007

Fiddle about with this as much as you want, but I make no promises as to its stability
or usefulness. If you do fiddle with it anyway, you must leave this header intact.
*/

//The Symbol object. These objects will represent bits of code.
class Symbol {
public:
    int getIndex(); //returns the index of the symbol. The index is a way of identifying a symbol in a specific position.
    int getCode(); //returns the code of the symbol. The symbol's code identifies what kind of symbol it is. That is, what it does.
    void setIndex(int newIndex); //changes the symbol's index. This is akin to moving it.
private:
    int myIndex;
    int myCode;
};

int Symbol::getIndex() {
    return myIndex;
}

int Symbol::getCode() {
    return myCode;
}

void Symbol::setIndex(int newIndex) {
    myIndex = newIndex;
}
//End of the symbol object block.

//The Word object. These objects will represent groups of symbols that form functional procedures.
class Word {
public:
    int getIndex(); //The index of the word. See the comment on the Symbol object's getIndex() method.
    unsigned int getSize();
    void setIndex(int newIndex);
    void setSize(unsigned int wSize);
private:
    int myIndex;
    unsigned int mySize;
};

int Word::getIndex() {
    return myIndex;
}

void Word::setSize(unsigned int wSize) {
    mySize = wSize;
}

void Word::setIndex(int newIndex) {
    myIndex = newIndex;
}

unsigned int Word::getSize() {
    return mySize;
}
//End of the Word object block.

#include <iostream>
using namespace std;

int main () {
    Symbol S1;
    cout << S1.getIndex();

    Word W1;
    cout << endl;
    cout << W1.getIndex() << "," << W1.getSize();
    return 0;
}
So far, all it does is allow the initialization of “Symbol” objects and “Word” objects, and the assignment of indexes and sizes to them. All the main() function does is initialize some test objects and print off their properties to the console.

Symbolic A.I.

The other day, I was bored, and so I started clicking on the StumbleUpon button on my Firefox toolbar (StumbleUpon is a neat little free plugin that sends you to random user-discovered webpages. Good if you’re bored and you want to discover something new, but also dangerously addictive). Somehow, I ran into a production journal for a program called PixelMachine, a raytracer programmed in C++, and put together over a weekend. It’s really quite impressive. That inspired me to start learning C++.

Today, boredom struck again. After clicking “Stumble” for a few hours, hoping to find something amusing, a thought occurred: why not start a similarly impressive project? What, though, could I possibly do? I’m still far too much of a C++ novice to program anything like that. I also didn’t want to do anything derivative. So I thought, why not a symbolic A.I. program?

I don’t know if “symbolic A.I.” is a pre-existing term, but even if it is, I’ll re-define it here (try and stop me! I dare ya’!). In my mind, symbolic A.I. is a sort of self-modifying computer program. The symbols involved represent little bits of computer code. This code, though, can modify and extend itself, in a very recursive, loopy sort of way. This is how I imagine the program’s learning process:

  1. The program is primed with a bit of code, perhaps a code for giving a reply to a yes/no question.
  2. The program is run. The program “mutates” and self-modifies at runtime.
  3. The program is run again, and user feedback (or computer-generated feedback) determines the fitness of the program’s new version.
  4. The information from Step 3 is used to “push” the program in a certain direction.
  5. Go back to Step 2.
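The loop above can be caricatured in a few lines of Python. This is only a toy stand-in (a fixed target string plays the role of the feedback in Step 3, which is far simpler than the self-modifying code I have in mind), but it shows the mutate-evaluate-push cycle:

```python
import random

TARGET = "YES"    # toy stand-in for the desired reply (my assumption)
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ"

def fitness(program):
    # Step 3: feedback scores the program; here, letters matching the target.
    return sum(1 for a, b in zip(program, TARGET) if a == b)

def mutate(program):
    # Step 2: the "program" mutates at one random position.
    i = random.randrange(len(program))
    return program[:i] + random.choice(ALPHABET) + program[i + 1:]

def evolve(seed, steps=2000):
    program = seed                  # Step 1: primed with an initial program
    for _ in range(steps):          # Step 5: loop back to Step 2
        candidate = mutate(program)
        # Step 4: keep mutations that don't hurt the fitness.
        if fitness(candidate) >= fitness(program):
            program = candidate
    return program

print(evolve("XXX"))
```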

Thus begins what will likely be a blundering saga: the SymbolicAI program-development journal! Not only will this track the development of SymbolicAI itself, but also my growth as a C++ programmer. You, dear reader, will follow me as I thumb through references, hunt down tutorials on the Internet, and reverse-engineer and study other C++ programs.

And so it begins. I start with this:

#include <iostream>
using namespace std;

int main () {
    return 0;
}

Who knows where I’ll end up? To work!

Ultrafunctions Revisited

I was contemplating the properties of the ultrafunction (see the previous post), wondering whether there might be some higher-level version of it, when I realized that the ultrafunction itself is that function. For example:

u(f(a), n) = f(f(f(…f(a)…))) (f applied n times)

But then, it follows naturally that

u(u(f(a), m), n) = u(u(…u(f(a), m)…, m), m) (the ultrafunction applied n times)

Therefore, even though many nested-function situations give rise to an infinite hierarchy of such functions (consider Knuth’s up-arrow system), in this case a single function fills in the entire hierarchy all by itself.

Superfunctions and Ultrafunctions

Okay, so my first mathematics-related post was a bit of a letdown…but I think I’ve recovered some of my mathematical street cred (yes, mathematicians have street cred; how do you think they get into all the elite nightclubs?).

Suppose you have a function f(a, b). I then define the “superfunction” s(f) of f(a, b) as:

s(f(a,b)) = f(a, f(a, f(a, … a))) (repeated b times)

Thus, s(a+b) = a * b, and s(a * b) = a ^ b, et cetera.

Consider also the “ultrafunction” u(f(a, b), c) of f(a, b) (of which the superfunction is the special case c = 1; see below), defined thus:

u(f(a, b), c) = s(s(s(…f(a,b)…))) (c repeats)

u(a+b, 1) = a * b

u(a+b, 2) = a ^ b

s(f(a,b)) = u(f(a,b), 1)
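For concreteness, here’s how the two definitions can be sketched in Python, treating s and u as higher-order functions (f is any two-argument function):

```python
def superfunction(f):
    # s(f)(a, b) = f(a, f(a, ... f(a, a) ...)), with b copies of a in total.
    def s(a, b):
        result = a
        for _ in range(b - 1):
            result = f(a, result)
        return result
    return s

def ultrafunction(f, c):
    # u(f, c): the superfunction applied c times; u(f, 1) == s(f).
    g = f
    for _ in range(c):
        g = superfunction(g)
    return g

add = lambda a, b: a + b
print(ultrafunction(add, 1)(3, 4))   # s(a+b) = a * b
print(ultrafunction(add, 2)(2, 5))   # s(s(a+b)) = a ^ b
```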