The overall goal of this project was to write a program which could animate an alterable image using the graphics module. In other words, we all set out to make overly ambitious animations using an antiquated, simplistic animation module. My ambitions, like Icarus, flew a little too high and were similarly crushed in a deadly plummet to the earth. I set out with the original idea of making a three-layer perceptron neural network accompanied by an animation that mimicked the simulated network's behavior.
This ambition was first frustrated by my original project idea. Making a neural network was not too daunting, but making one that could do math (as I had wished it to) was a bit trickier than I had originally anticipated. A neural network works through a process known as parallel processing, in which information is converted into simple binary relations that can then be represented and processed by neurons which themselves produce their own binary, all-or-nothing outputs. By manipulating the weights, or connections, between different neurons, a neural network can be made to process complex information and produce a consistent and purposeful output. This is great in theory but not so easy to emulate in practice. The first problem was that there seemed to be no simple way to process the information that was given to the network: with four different mathematical symbols and a multitude of numbers, creating an efficient network model was going to be complex. Then, of course, there was the trouble of putting together a table of weights (a collection of numbers which represent the connection potential between different neurons) that could actually perform the mathematical tasks consistently. Traditionally this would be accomplished by instituting a supervised learning algorithm, which changes the weights of the network depending on whether or not the network produced an accurate output. To put it mildly, I had neither the time, the energy, nor the programming know-how to accomplish such a task. This meant that I had to pre-assign the weight values so that an accurate output could be reached. The only way to do this was to significantly dumb down my program. What I ended up with was a very simple two-layer neural network which could only add two numbers between 0 and 5. This was accomplished by making use of lists which could then be manipulated and later interpreted as the program proceeded. Here is the code:
from sys import argv

# The two numbers to add, each between 0 and 5, read from the command line.
inputs = [int(argv[1]), int(argv[2])]
# Weight 1/(j+1) means neuron j fires only when its input is at least j+1,
# a unary encoding; [1,0] and [0,1] route each input to its own bank of five.
Weights = [[1, .5, .333333334, .25, .2],
           [1, .5, .333333334, .25, .2],
           [1, 0], [0, 1]]
Outputs = []
layer1 = [inputs[0], inputs[1]]   # the input layer routes its values onward
Answer = 0
# Each neuron produces an all-or-nothing output of 1 or 0.
for L1 in range(5):
    if layer1[0]*Weights[0][L1] >= 1:
        Outputs.append(1)
    else:
        Outputs.append(0)
for L2 in range(5):
    if layer1[1]*Weights[1][L2] >= 1:
        Outputs.append(1)
    else:
        Outputs.append(0)
# Summing the ten binary outputs yields the sum of the two inputs.
for M in range(10):
    Answer = Answer + Outputs[M]
Essentially this code assigns five neurons to each of the two input neurons. The input neurons simply route the information given to them to their five connected neurons, whose connections evaluate the given values by dividing them by a pre-assigned number.
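To make the routing-and-division rule concrete, here is a minimal sketch of the same logic as a single function (a reconstruction for illustration, not the exact project code): a neuron with weight 1/(j+1) fires exactly when its input is at least j+1, so each input is re-encoded in unary across five neurons, and summing the ten binary outputs reproduces the sum.

```python
def add_via_network(a, b):
    """Add two integers in [0, 5] with the two-layer threshold network."""
    weights = [1, .5, .333333334, .25, .2]   # 1/(j+1) for j = 0..4
    outputs = []
    for value in (a, b):
        for w in weights:
            # all-or-nothing firing: 1 if input * weight clears the threshold
            outputs.append(1 if value * w >= 1 else 0)
    return sum(outputs)
```

For example, an input of 3 fires the neurons with weights 1, .5, and .333..., contributing three 1s to the output list.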
Though I was not completely deterred by the complications that I had faced in constructing my neural network, my ambitions had certainly been curbed and so I tried to keep the rest of the project simple. This was, of course, not to be.
In order to accomplish something that even mildly resembled a network, I took a series of basic shapes such as circles and boxes, redefined them in a separate file named theshapes.py, and then proceeded to define more complex shapes with them. Probably the most complex was the actual neuron, which involved a circle within a circle surrounded by edgy rectangles that were randomly generated as follows:
"if scale == 1:
for edges in range(15):
relement = scale*(random.uniform(x-.8*r,x+.8*r))
relement2 = scale*(-math.sqrt(r*r+(x-relement)*(x-relement))+y)
theshapes.box(relement,relement2+r+.75*r,(0.3+random.random())*r,.1*r, scale, color,win)
for edges in range(15):
relement = scale*(random.uniform(x-.8*r,x+.8*r))
relement2 = scale*(math.sqrt(r*r+(x-relement)*(x-relement))+y)
theshapes.box(relement,relement2-r-.75*r,-(0.3+random.random())*r,.1*r, scale, color,win)"
It is important to note the if scale == 1: clause, which was added later because the rectangles would be drawn in funny and often incomprehensible ways when the scale was changed even slightly.
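The placement logic above can be checked without opening a graphics window. This sketch (a hypothetical helper reusing the x, y, and r symbols from the excerpt, at the fixed scale of 1) computes only the random anchor points of the lower band of rectangles, confirming they stay in the expected strip around the circle.

```python
import math
import random

def edge_anchors(x, y, r, n=15):
    """Random anchor points for the 'edgy' rectangles around a neuron
    centered at (x, y) with radius r, mirroring the excerpt's arithmetic."""
    anchors = []
    for _ in range(n):
        rx = random.uniform(x - .8 * r, x + .8 * r)
        # same expression as the project code: an offset curve below y
        ry = -math.sqrt(r * r + (x - rx) * (x - rx)) + y
        anchors.append((rx, ry))
    return anchors
```

Because rx is confined to within 0.8r of the center, the square root stays between r and roughly 1.28r, which keeps the rectangles hugging the neuron.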
Neural Network 1, "3+2"
Finally my basic structure was complete, and using the graphics module's move function I had successfully made 'neurotransmitters' move mechanically from my axons to my second layer of neurons. Unfortunately, instead of moving in unison as I had imagined, the transmitters moved one by one with a pause in between. That being said, thanks to Stephanie's genius, I did manage to create a spiffy background (as seen above) which repeats the string containing the two numbers that the user entered, built by simply joining the argv inputs with a "+" in between. This was accomplished as follows,
"for i in range (int(25*scale)):
for g in range(int(35*scale)):
Text = gr.Text(gr.Point(i*20-10 +x*scale, g*20-10+y*scale), argv""+argv)
This was especially neat because it made direct and novel use of the user's inputs.
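Stripped of the drawing calls, the background loop reduces to a grid of stamp positions. A sketch of just the coordinate arithmetic (assuming the same x, y, and scale parameters as the excerpt above):

```python
def background_points(x, y, scale):
    """Centers at which the 'a+b' string is stamped to tile the window,
    spaced 20 units apart and offset by the scaled origin."""
    points = []
    for i in range(int(25 * scale)):
        for g in range(int(35 * scale)):
            points.append((i * 20 - 10 + x * scale, g * 20 - 10 + y * scale))
    return points
```

At scale 1 with the origin at (0, 0) this produces a 25-by-35 grid of 875 stamp positions.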
Finally, I decided that instead of just displaying the answer produced by the simulated network in the terminal window, I would include it in my final animation. This way, after my transmitters had transmitted and a few of my neurons had flashed, the answer corresponding to the output of the neural network would be displayed by lighting up the equivalent number of layer-two neurons -- this is why some of the neurons are light blue in the above picture (5, or 3+2, of them to be precise). This feature was accomplished by a series of if statements which referenced the answer that the network gave to the math problem in order to decide how many neurons to color differently.
It may be noted that the random elements of the neurons were simply painted over by a new set of random elements, because I had kept the draw function contained in the original neuron function. Though I could have moved the color fill out of this early function, I thought the achieved effect was cool and decided to leave it as it was.
Neural network 2
This final picture simply displays the final task, which was to demonstrate the scalability of the program. Because I used the x and y inputs of init() to simply add or subtract from the x and y values at the origin, it should be noted that the scale feature affects the x and y features: if you want to put the display at point (100,100) and scale the image by a half, you need to input a 200 x value and a 200 y value. Nonetheless this is simply a mistake of design, and the scalability is otherwise consistent throughout the model.
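The interplay between scale and position can be written down directly: since the offsets passed to init() get multiplied by the scale, the coordinate you must supply is the desired one divided by the scale. A one-line sketch (hypothetical helper, matching the 100-at-half-scale example above):

```python
def required_inputs(desired_x, desired_y, scale):
    """Coordinates to pass to init() so the drawing actually lands at
    (desired_x, desired_y) after the offsets are multiplied by scale."""
    return desired_x / scale, desired_y / scale
```

So placing the display at (100, 100) with a scale of 0.5 requires passing in (200, 200).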
As has already been noted, several of the extensions were accomplished in my endeavors; namely, the scene is drastically affected by command line elements, in particular the two numbers that the user chooses to add. It should also be noted that several lists were passed between my different functions in order to accomplish the complex interaction between the simulated network and its animation.
Probably the most vivid lesson I can take from this project is to break my ambitions into bite-size pieces. I started out having imagined that my simulation would achieve something like this:
and merely accomplished some fuzzy lines connecting a few blue circles.
All kidding aside, this project taught me a great deal about how to work and interact with the graphics module, and also about the overall use of lists and strings to condense information into easily alterable and manageable items.
The main function and the complex objects can be found in my private folder under project6/: NNetwork1.py and theshapes.py, as well as Scene2.py, can all be found there.