So it's late here, and my Google skills seem to be failing me. I've found some great responses on SO before (time and time again), so I thought you guys could help.
I have a neural network I'm trying to run in native Objective-C. It works, but it's too slow. These networks are not recurrent. I run each network about 20,000 times (128x80 times, or around that). The problem is that these networks really just boil down to math functions (each network is a 4-dimensional function, taking x, y, dist(x,y), and bias as inputs, and outputting 3 values).
What I want to do is convert each network (only once) into a function call, or a block of code, at runtime in Objective-C.
How do I do this? I could build a big string of the math operations that need to be performed, but how do I go about executing that string, or converting it into a block of code for execution?
Again, my late-night search failed me, so sorry if this has already been answered. Any help is greatly appreciated.
-Paul
Edit: Aha! Great success! Nearly 24 hours later, I have working code to turn a neural network with up to 4 inputs into a single 4-dimensional function. I used the block method suggested by Dave DeLong in the answers.
For anybody who wants to follow what I've done, here is a (quick) breakdown (excuse me if this is incorrect etiquette on Stack Overflow). First, I made a few typedefs for the different block functions:
typedef CGFloat (^oneDFunction)(CGFloat x);
typedef CGFloat (^twoDFunction)(CGFloat x, CGFloat y);
typedef CGFloat (^threeDFunction)(CGFloat x, CGFloat y, CGFloat z);
typedef CGFloat (^fourDFunction)(CGFloat x, CGFloat y, CGFloat z, CGFloat w);
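Any activation function fits the oneDFunction shape. Purely as an illustration (my actual activation isn't important here; a sigmoid needs exp() from math.h):

// Example only: a sigmoid activation expressed as a oneDFunction.
// Copying the block literal moves it to the heap (MRC-era blocks).
oneDFunction sigmoid = [^(CGFloat x){
    return (CGFloat)(1.0 / (1.0 + exp(-x)));
} copy];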
A oneDFunction takes the form f(x), a twoDFunction is f(x,y), etc. Then I made functions to combine two fourDFunction blocks (and two oneDs, two twoDs, etc., although those turned out not to be necessary).
fourDFunction (^combineFourD)(fourDFunction f1, fourDFunction f2) =
^(fourDFunction f1, fourDFunction f2){
    // Returns a block computing f1 + f2; consumes (releases) both inputs.
    fourDFunction blockToCopy = ^(CGFloat x, CGFloat y, CGFloat z, CGFloat w){
        return f1(x,y,z,w) + f2(x,y,z,w);
    };
    fourDFunction act = [blockToCopy copy];
    [f1 release];
    [f2 release];
    // Need to release act at some point
    return act;
};
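For example (the blocks f and g here are just placeholders), combining two blocks works like this. Note that combineFourD consumes its arguments, so pass blocks you own:

fourDFunction f = [^(CGFloat x, CGFloat y, CGFloat z, CGFloat w){ return x + y; } copy];
fourDFunction g = [^(CGFloat x, CGFloat y, CGFloat z, CGFloat w){ return z * w; } copy];
// sum(x,y,z,w) == x + y + z*w; f and g were released by combineFourD
fourDFunction sum = combineFourD(f, g);
// ... use sum, then release it
[sum release];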
And, of course, I needed to apply the activation function to the fourD function for every node, and to multiply each incoming function by its connection weight:
// For applying the activation function
fourDFunction (^applyOneToFourD)(oneDFunction f1, fourDFunction f2) =
^(oneDFunction f1, fourDFunction f2){
    // Returns a block computing f1(f2(...)); consumes (releases) both inputs.
    fourDFunction blockToCopy = ^(CGFloat x, CGFloat y, CGFloat z, CGFloat w){
        return f1(f2(x,y,z,w));
    };
    fourDFunction act = [blockToCopy copy];
    [f1 release];
    [f2 release];
    // Need to release act at some point
    return act;
};
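Squashing one node then looks like this (netInput is a placeholder for some node's weighted-sum block; copying sigmoid retains it, so the shared activation survives the release inside):

// netInput is hypothetical; applyOneToFourD consumes both arguments
fourDFunction activated = applyOneToFourD([sigmoid copy], netInput);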
// For applying the weight to the function
fourDFunction (^weightCombineFour)(CGFloat weight, fourDFunction f1) =
^(CGFloat weight, fourDFunction f1)
{
    // Returns a block computing weight * f1(...); consumes (releases) f1.
    fourDFunction blockToCopy = ^(CGFloat x, CGFloat y, CGFloat z, CGFloat w){
        return weight*f1(x,y,z,w);
    };
    fourDFunction act = [blockToCopy copy];
    [f1 release];
    // Need to release act at some point
    return act;
};
Then, for each node in the network, I simply applied the activation function to the sum of the fourD functions from the source neurons, each multiplied by its connection weight. After composing all those blocks, I took the final function from each output. My outputs are therefore separate 4D functions of the inputs.
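In sketch form, building one node looks something like this (the function name, arguments, and the count >= 1 assumption are mine, not part of the network code; it also assumes the combinators above are in scope):

// Sketch: compose one node's 4D function from its source neurons' functions
// and connection weights. Assumes count >= 1.
fourDFunction buildNodeFunction(oneDFunction activation,
                                fourDFunction *sources,
                                CGFloat *weights,
                                NSUInteger count)
{
    // The combinators consume their block arguments, so pass copies (retains)
    // to keep the caller's sources and activation alive.
    fourDFunction sum = weightCombineFour(weights[0], [sources[0] copy]);
    for (NSUInteger i = 1; i < count; i++) {
        sum = combineFourD(sum, weightCombineFour(weights[i], [sources[i] copy]));
    }
    // Squash the weighted sum; the caller owns (and must release) the result.
    return applyOneToFourD([activation copy], sum);
}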
Thanks for the help, this was very cool.