8

I came across Alex Wissner-Gross and his theory of intelligent behavior in his TED Talk, linked here. I have tried to read the scholarly paper linked here, which is associated with his presentation, but I don't have enough comprehension of the math and physics to really understand what's going on and, more importantly, how I can reproduce this equation in Python.

There are a couple of unique models for entropy maximization I found that are implemented in Python, but I don't know how to set them up or whether they are identical to Wissner-Gross's equation.

SciPy: MaxEntropy

MEMT: Tutorial | Homepage

Assuming these equations are different forms of Wissner-Gross's equation, and using one of the libraries above or some other library, how do I set up an entropy maximization algorithm?

In particular:

  • How do I initialize the entities subject to change
    • (like the circles in Wissner-Gross's simulations)?
  • How do I feed the model the different options for action
    • (like the movement of entities in the model's closed system)?
  • How do I set up information about actions that produce constraints in certain contexts
    • (equivalent to the bounding boxes in the simulations, and the inability to move past them)?
  • What other variables and processes does the equation necessitate?
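To make the I/O I'm asking about concrete, here is a minimal sketch of how I imagine the pieces fitting together. None of these names come from the libraries above; they are all placeholders of my own invention:

```python
import random

class Entity:
    """An entity subject to change, like a circle in Wissner-Gross's simulations."""
    def __init__(self, x, y):
        self.x, self.y = x, y

# the different options for action: small moves, or staying put
ACTIONS = [(-1, 0), (1, 0), (0, -1), (0, 1), (0, 0)]

# constraint context: a bounding box the entity cannot move past
BOUNDS = (0, 0, 100, 100)  # xmin, ymin, xmax, ymax

def allowed_actions(entity):
    """Filter out actions that would violate the bounding-box constraint."""
    xmin, ymin, xmax, ymax = BOUNDS
    return [(dx, dy) for (dx, dy) in ACTIONS
            if xmin <= entity.x + dx <= xmax and ymin <= entity.y + dy <= ymax]

def step(entity):
    """Pick one allowed action at random and apply it."""
    dx, dy = random.choice(allowed_actions(entity))
    entity.x += dx
    entity.y += dy
```

What I don't know is where the entropy maximization itself plugs into a loop like this.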
James Beezho
  • It's worth noting that these types of questions aren't on-topic for Stack Overflow. I'm not going to give a close vote because I think the question is better than a lot of the other supposedly-valid questions that the Python tag gets, but I imagine some others will. Basically, the phrase "grateful for suggestions on libraries" is not appropriate for this site, although you can try chat or other resources (like the Python mailing list). – Veedrac May 20 '14 at 04:35
  • +1 for thought and clarity put into this question. I also don't have the required level of comprehension at this time to even attempt to answer :) – James Mills May 20 '14 at 04:42
  • @Veedrac, I changed the phrasing. Although this is a request for a list of things, all items on the list pertain to the I/O of an algorithm. I have researched Quantum Mechanics and Entropy for about two days now, but the subject is deep, and I'm most concerned with the I/O aspects of the algorithm. – James Beezho May 20 '14 at 05:07

2 Answers

5

The question is quite general, and unfortunately I don't think this answer will give you as much of a solution as you may have hoped for.

First of all, it seems that your assumption that "these equations are different forms of Wissner's equation" is a bad one.

Having browsed through the paper, it does seem that the model for what they refer to as causal entropic force (F) shares some components with the maximum entropy models (not surprisingly), for which you have found some libraries. However, to see how these libraries could be used in an implementation of causal entropic forcing, you will have to look at the paper and find how the different expressions match/share components. I doubt anyone here will do that for you. The Wikipedia article about maximum entropy may help you a bit to find the relation.

To get started with the animation and movement, I suggest you find some introduction to sprite animation, for example this one. This will help you get a sense of how to move objects around in a space using code.
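For instance, a bare-bones movement loop (no graphics library, just the position update you would run once per frame) could look like the sketch below; the bounce-off-the-wall rule stands in for the bounding boxes in the simulations:

```python
def step(pos, vel, box=(0.0, 100.0)):
    """One frame of movement along a single axis inside a box."""
    lo, hi = box
    pos += vel
    if pos < lo or pos > hi:  # hit a wall: clamp position, reverse direction
        pos = max(lo, min(pos, hi))
        vel = -vel
    return pos, vel

pos, vel = 95.0, 2.0
for _ in range(5):
    pos, vel = step(pos, vel)
# after five frames the object has bounced off the right wall
```

A sprite framework does the same thing per frame, just with drawing added on top.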

Edit

The paper's supplemental material is definitely worth a look as well, even containing some pseudocode. Also, reference [12] in the paper reads as follows:

Our general-purpose causal entropic force simulation software will be made available for exploration at http://www.causalentropy.org
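Until that software appears, the supplemental pseudocode can be paraphrased roughly like this (my own loose sketch, not the authors' implementation): from the current state, try each candidate action, sample many random future paths under the system's constraints, estimate how diverse the reachable outcomes are, and take the action whose futures are most diverse. The entropy estimate and the clamp below are crude stand-ins for the paper's path entropy and bounding constraints:

```python
import math
import random
from collections import Counter

def outcome_entropy(samples, bins=10):
    """Shannon entropy of a histogram of final states -- a crude stand-in
    for the path entropy used in the paper."""
    lo, hi = min(samples), max(samples)
    width = (hi - lo) / bins or 1.0
    counts = Counter(min(int((s - lo) / width), bins - 1) for s in samples)
    n = len(samples)
    return -sum((c / n) * math.log(c / n) for c in counts.values())

def rollout(x, horizon, noise):
    """One random future path; the [0, 1] clamp plays the role of a
    bounding-box constraint."""
    for _ in range(horizon):
        x = min(max(x + random.gauss(0.0, noise), 0.0), 1.0)
    return x

def causal_entropic_step(x, actions, horizon=20, paths=200, noise=0.05):
    """Pick the action whose sampled futures have the highest entropy."""
    return max(actions,
               key=lambda a: outcome_entropy(
                   [rollout(x + a, horizon, noise) for _ in range(paths)]))
```

This only captures the general flavor; the paper works with path probabilities over phase-space trajectories, not histograms of final states.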

Henrik
0

I tried to apply the ideas from that talk in a somewhat simplistic manner.

My imaginary system had one variable, with added uniform noise and positive feedback. Think of the angle of a stick being balanced upright in a gravity field. The evolution of my imaginary system over one tick of time was described as

def simulate1(theta):
    # introduce evil random displacement
    theta = theta + random.uniform(-noise, noise)

    # apply stupid physics laws (positive feedback: the stick falls over)
    f = math.sin(theta)
    theta = theta + f*dt

    return theta

#
# simulate evolution of the system during so many ticks
#
def simulate(theta, ticks):
    thetas = []
    for _ in range(ticks):
        theta = simulate1(theta)
        thetas.append(theta)

    return thetas

If I run this simulation, theta quickly goes to PI or -PI and fluctuates there.

Now I introduce the notion of kicks (here we either do nothing or kick the system left or right, five times harder than the noise does):

kicks = [0, -5*noise, 5*noise]

Finally, we come to the main loop. On each iteration the following contraption considers the possibility of giving the system a kick and calculates a metric (hopefully) correlated with the variety of possible futures (not entropy as in the original question):

while True:
    best_kick = None
    best_median_var = None

    for kick in kicks:
        median_var = simulate_median_var(theta + kick)
        if (best_kick is None) or (median_var > best_median_var):
            best_median_var = median_var
            best_kick = kick

    print("theta=%f\tbest_kick=%f\tbest_median_var=%f" % (theta, best_kick, best_median_var))
    theta = theta + best_kick

    theta = simulate1(theta)

And here is the actual implementation of the metric:

#
# estimate the variation of possible futures
# assume the variation is higher if the standard deviation is higher (is it a good one?)
#
def simulate_var(theta, ticks):
    thetas = simulate(theta, ticks)
    (theta_hist, _) = numpy.histogram(thetas)
    #print("# %s" % theta_hist)
    return numpy.std(theta_hist)

# calculate the median of the variation over so many rounds
def simulate_median_var(theta):
    variations = []
    for _ in range(rounds):
        variations.append(simulate_var(theta, ticks))
    return numpy.median(variations)

First it estimates the distribution of possible system states: numpy.histogram() sorts the whole evolution history of theta into 10 bins. Then we calculate the standard deviation over all the bin counts. It is not necessarily the best metric, but it seems to work as a ballpark estimate.

Here is what the output (with some extra debugging info) looks like:

theta=0.000000  best_kick=0.000000  best_median_var=16.443844 # [(-0.005, 12.13260071048248), (0, 16.443843832875572), (0.005, 12.13260071048248)]
theta=0.000328  best_kick=0.000000  best_median_var=16.437761 # [(-0.005, 12.320714265009151), (0, 16.437761404765553), (0.005, 12.091319200153471)]
theta=0.001096  best_kick=0.000000  best_median_var=15.811388 # [(-0.005, 12.735776379946374), (0, 15.811388300841896), (0.005, 11.798304963002099)]
theta=0.001218  best_kick=0.000000  best_median_var=15.792403 # [(-0.005, 12.743625857659193), (0, 15.792403236999744), (0.005, 11.798304963002099)]
theta=0.000433  best_kick=0.000000  best_median_var=16.437761 # [(-0.005, 12.320714265009151), (0, 16.437761404765553), (0.005, 11.958260743101398)]
theta=0.000931  best_kick=0.000000  best_median_var=16.112107 # [(-0.005, 12.625371281669304), (0, 16.112107248898266), (0.005, 11.798304963002099)]
theta=0.001551  best_kick=0.000000  best_median_var=14.913082 # [(-0.005, 13.046072205840346), (0, 14.913081505845799), (0.005, 11.661903789690601)]
theta=0.001249  best_kick=0.000000  best_median_var=15.491933 # [(-0.005, 12.759310326189265), (0, 15.491933384829668), (0.005, 11.798304963002099)]
theta=0.002275  best_kick=0.000000  best_median_var=14.021412 # [(-0.005, 13.512956745287095), (0, 14.021412197064887), (0.005, 11.523888232710346)]
theta=0.002349  best_kick=0.000000  best_median_var=14.035669 # [(-0.005, 13.527749258468683), (0, 14.035668847618199), (0.005, 11.523888232710346)]
theta=0.002224  best_kick=0.000000  best_median_var=14.085453 # [(-0.005, 13.535139452550904), (0, 14.085453489327207), (0.005, 11.523888232710346)]
theta=0.002126  best_kick=0.000000  best_median_var=14.300346 # [(-0.005, 13.512956745287095), (0, 14.300345799157828), (0.005, 11.523888232710346)]
theta=0.003034  best_kick=-0.005000 best_median_var=14.615061 # [(-0.005, 14.615060725156088), (0, 13.274034804836093), (0.005, 11.41052146047673)]
theta=-0.003091 best_kick=0.005000  best_median_var=14.587666 # [(-0.005, 11.41052146047673), (0, 13.274034804836093), (0.005, 14.587666023048376)]
theta=0.001966  best_kick=0.000000  best_median_var=14.345731 # [(-0.005, 13.274034804836093), (0, 14.345731072343439), (0.005, 11.636150566231086)]
theta=0.002721  best_kick=-0.005000 best_median_var=14.021412 # [(-0.005, 14.021412197064887), (0, 13.512956745287095), (0.005, 11.523888232710346)]
theta=-0.002635 best_kick=0.005000  best_median_var=14.021412 # [(-0.005, 11.523888232710346), (0, 13.535139452550904), (0.005, 14.021412197064887)]
theta=0.002066  best_kick=0.000000  best_median_var=14.310835 # [(-0.005, 13.29661611087573), (0, 14.310835055998654), (0.005, 11.636150566231086)]
theta=0.001485  best_kick=0.000000  best_median_var=15.198684 # [(-0.005, 12.969194269498781), (0, 15.198684153570664), (0.005, 11.781341180018513)]
theta=0.001414  best_kick=0.000000  best_median_var=15.201973 # [(-0.005, 12.984606270503546), (0, 15.201973200284616), (0.005, 11.781341180018513)]
theta=0.000542  best_kick=0.000000  best_median_var=16.431676 # [(-0.005, 12.328828005937952), (0, 16.431675598153642), (0.005, 11.958260743101398)]
theta=0.000726  best_kick=0.000000  best_median_var=16.443844 # [(-0.005, 12.521980673998822), (0, 16.443843832875572), (0.005, 11.958260743101398)]
theta=0.000633  best_kick=0.000000  best_median_var=16.437761 # [(-0.005, 12.433824833895642), (0, 16.437761404765553), (0.005, 11.958260743101398)]
theta=-0.000171 best_kick=0.000000  best_median_var=16.437761 # [(-0.005, 12.116104984688768), (0, 16.437761404765553), (0.005, 12.255610959882823)]
theta=-0.000934 best_kick=0.000000  best_median_var=15.824032 # [(-0.005, 11.798304963002099), (0, 15.824032355881986), (0.005, 12.545915670049755)]
theta=-0.000398 best_kick=0.000000  best_median_var=16.440803 # [(-0.005, 11.958260743101398), (0, 16.440802618820562), (0.005, 12.320714265009151)]
theta=-0.001464 best_kick=0.000000  best_median_var=14.913082 # [(-0.005, 11.661903789690601), (0, 14.913081505845799), (0.005, 12.969194269498781)]
theta=-0.002141 best_kick=0.000000  best_median_var=14.310835 # [(-0.005, 11.532562594670797), (0, 14.310835055998654), (0.005, 13.512956745287095)]
theta=-0.002893 best_kick=0.005000  best_median_var=14.314328 # [(-0.005, 11.41052146047673), (0, 13.512956745287095), (0.005, 14.314328059637504)]
theta=0.003015  best_kick=-0.005000 best_median_var=14.314328 # [(-0.005, 14.314328059637504), (0, 13.274034804836093), (0.005, 11.41052146047673)]
theta=-0.002201 best_kick=0.000000  best_median_var=14.042792 # [(-0.005, 11.532562594670797), (0, 14.042791745233567), (0.005, 13.45362404707371)]
theta=-0.002234 best_kick=0.000000  best_median_var=14.042792 # [(-0.005, 11.523888232710346), (0, 14.042791745233567), (0.005, 13.512956745287095)]
theta=-0.001903 best_kick=0.000000  best_median_var=14.473666 # [(-0.005, 11.653325705565772), (0, 14.473665878659745), (0.005, 13.274034804836093)]
theta=-0.002782 best_kick=0.005000  best_median_var=14.085453 # [(-0.005, 11.41052146047673), (0, 13.520355024924458), (0.005, 14.085453489327207)]
theta=0.003083  best_kick=-0.005000 best_median_var=14.587666 # [(-0.005, 14.587666023048376), (0, 13.274034804836093), (0.005, 11.41052146047673)]
theta=-0.001439 best_kick=0.000000  best_median_var=15.491933 # [(-0.005, 11.661903789690601), (0, 15.491933384829668), (0.005, 12.961481396815721)]

The above simulation was done with:

noise = 0.001 # noise amplitude
kicks = [-5*noise, 0, 5*noise] # what kicks to try
ticks = 100 # how many time ticks to simulate
rounds = 1000 # how many rounds to simulate
dt = 0.1 # simulation time step

I realise this does not exactly follow the math in the original paper, but it (rather inaccurately) uses its general idea.

abb