
Seeing as, as far as we know, one half of your brain is logical and the other half is emotional, and the wants of the emotional side are fed to the logical side in order to fulfill those wants: has there been any research done on connecting two separate neural networks to one another (one trained to be emotional, and one trained to be logical) to see if it would result in an almost free-will sort of "brain"?

I don't really know anything about neural networks except that they were modeled after the biological synapses in the human brain, which is why I ask.

I'm not even sure this would be possible, considering that even a trained neural network sometimes doesn't act logically (i.e., doesn't do what you thought you trained it to do).

leeand00
  • How would you model emotions? – jamieb Feb 17 '10 at 20:36
  • I liked the question, though I have no idea if it makes sense for an expert – Samuel Carrijo Feb 17 '10 at 20:42
  • @jamieb I dunno, I would think you would have to have that somewhat pre-modeled. If an organism knows something is bad for it, its body already tells it this through a sensation like pain. Emotions are generally read operations, aren't they? They either have to be read from a sensation or from a stored memory, which consequently makes us feel negative or positive. But that's putting it too simplistically; sometimes you can feel positive and negative at the same time. So maybe I'm asking the wrong question here, since nobody's figured out how to emulate the emotional side of the brain yet – leeand00 Feb 17 '10 at 20:44
  • I don't think that "one half of your brain is logical and the other half of your brain is emotional" is a supportable thesis. And aren't two connected neural networks effectively one neural network? – Eric Mickelsen Feb 17 '10 at 20:49
  • ...and sometimes you will perceive something in a negative way that you used to perceive in a positive way (or vice versa). The environment and the objects in question may not have changed, but your response/reaction to them could change with newly acquired/reinterpreted knowledge. – FrustratedWithFormsDesigner Feb 17 '10 at 20:49
  • I can't comment on the subject of the question, but I'm tempted to upvote because it's a great example of the 'correct' use of spaces *before* parentheses. *So rare these days* – pavium Feb 17 '10 at 20:50
  • @tehMick I think what I'm trying to get at is what it is that keeps computers from driving themselves, as it were. – leeand00 Feb 17 '10 at 21:30
  • Artificial neural networks are best thought of as fancy function approximators that are at best coarse approximations to what part of a brain *might* do. For one thing, the typical training procedure (gradient descent via backpropagation) is completely biologically implausible, since there's no biological mechanism through which information about the gradient of the function could flow backward along a synapse. – dwf Apr 30 '10 at 20:56
  • This question appears to be off-topic because it is about AI theory, not a practical programming problem you are facing. – Martijn Pieters Nov 08 '13 at 16:43

4 Answers


First, most modern neural networks aren't really modeled after biological synapses. They use an artificial neuron with a differentiable activation function, which is what allows backpropagation to work, rather than a perceptron, which is a much more biologically accurate representation.

When you feed the output of one network into the input of another network, you've really just created one larger network, not two separate networks. It just happens that in this case portions of the networks would be trained independently.

That said, all neural networks have to be trained, which means you need sample inputs and sample outputs. You are looking to create a decision engine of sorts, I suppose. So you would need to create a dataset where it makes sense that there might be both an emotional and a rational response, such as purchasing an item. You'd have to train the 'rational' network to accept the output of an 'emotional' network as part of its input. That means you are really just training the rational decision engine to respond based on the level of 'distress' reported by the emotional network.
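To make that wiring concrete, here is a minimal sketch (assuming PyTorch; the names EmotionalNet and RationalNet and all the layer sizes are invented for illustration). The 'emotional' subnetwork produces a small state vector, the 'rational' subnetwork consumes it alongside the raw input, and chaining them really is just one bigger network:

    import torch
    import torch.nn as nn

    class EmotionalNet(nn.Module):
        """Maps raw stimulus features to a small 'emotional state' vector."""
        def __init__(self, n_inputs=8, n_emotions=4):
            super().__init__()
            self.layers = nn.Sequential(
                nn.Linear(n_inputs, 16), nn.Tanh(),
                nn.Linear(16, n_emotions))

        def forward(self, x):
            return self.layers(x)

    class RationalNet(nn.Module):
        """Takes the stimulus plus the emotional state and makes a decision."""
        def __init__(self, n_inputs=8, n_emotions=4):
            super().__init__()
            self.layers = nn.Sequential(
                nn.Linear(n_inputs + n_emotions, 16), nn.ReLU(),
                nn.Linear(16, 1), nn.Sigmoid())  # e.g. buy / don't buy

        def forward(self, x, emotion):
            return self.layers(torch.cat([x, emotion], dim=-1))

    emotional = EmotionalNet()
    rational = RationalNet()

    x = torch.randn(1, 8)                 # one stimulus
    decision = rational(x, emotional(x))  # composing them = one larger network

Each subnetwork could be trained independently first (as described above) and then fine-tuned end to end, since gradients flow through the composition exactly as they would through a single net.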

Just my two cents.

Tim Bender

I have also heard of one hemisphere being called "divergent" and one "convergent". This may not make any more sense than emotional vs. logical, but it does hint at how you might model it more easily. I don't know how the brain achieves some of the impressive computational feats it does, but I wouldn't be very surprised if it all revolved around balance; then again, maybe that is just one of the biases you have when you are a brain with two hemispheres (or any even number) :D

A balance between convergence and divergence is the crux of the creativity inherent in evolution. Replicating this with neural nets sounds promising to me. Suppose you make one learning system that generalizes and keeps representations of only the typical groups of patterns it is shown. Then you take another and make it generate all the in-betweens and mutants of the patterns it is shown. Then you feed them to each other in a circle, and poof, you have made something really interesting!
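As a toy version of that circle (a minimal sketch in Python; the crude k-means step standing in for the convergent side and the interpolate-and-perturb scheme standing in for the divergent side are my own inventions, purely for illustration):

    import numpy as np

    rng = np.random.default_rng(0)

    def converge(patterns, k=3):
        """Convergent side: keep only k 'typical' prototypes (crude k-means)."""
        centers = patterns[rng.choice(len(patterns), k, replace=False)]
        for _ in range(10):
            labels = np.argmin(((patterns[:, None] - centers) ** 2).sum(-1), axis=1)
            centers = np.stack([patterns[labels == j].mean(0) if (labels == j).any()
                                else centers[j] for j in range(k)])
        return centers

    def diverge(prototypes, n=50, noise=0.3):
        """Divergent side: generate in-betweens and mutants of the prototypes."""
        a = prototypes[rng.integers(len(prototypes), size=n)]
        b = prototypes[rng.integers(len(prototypes), size=n)]
        t = rng.random((n, 1))  # random blend factors give the in-betweens
        return t * a + (1 - t) * b + noise * rng.standard_normal(a.shape)

    # Feed them to each other in a circle:
    patterns = rng.standard_normal((50, 2))
    for step in range(5):
        prototypes = converge(patterns)  # generalize to typical groups
        patterns = diverge(prototypes)   # generate in-betweens and mutants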

Nathan
  • That makes me wonder at what point they become abstractions, the "mutants" that you spoke of...could be similar to the set patterns, but slightly different. – leeand00 Jun 29 '10 at 02:24
  • What does 'abstraction' mean to you? I suspect yours is a mutant definition. – Nathan Jun 29 '10 at 17:38
  • Abstraction in the programming sense. – leeand00 Jun 30 '10 at 11:51
  • The convergent half of the system is responsible for turning the mutants into abstractions. In clearer words: it is responsible for abstracting salient categories from a set of examples provided by the divergent half of the system. If abstraction is something more than categorization, or something more is required of a mind-like system than categorization, then this system is only an incomplete mind. For a taste of what might be required to complete it, see http://pespmc1.vub.ac.be/mstt.html – Nathan Jul 06 '10 at 14:02

It's even more complex than that, unbelievably. The left hemisphere works on a set of logical rules, it uses these to predict its environment and categorize input. It also infers rules and stores them for future use. The right hemisphere is based, as you said, on emotion, but also on memory of single, unique or emotionally relevant occurrences. A software implementation should also be able to retrieve and store these two data types and exchange "opinions" about them.
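A toy sketch of what storing those two data types and exchanging "opinions" about them might look like (plain Python; the class names and the rule that the most emotionally salient memory can override a general rule are invented here purely for illustration):

    class RuleStore:
        """Left-hemisphere style: general condition -> outcome rules."""
        def __init__(self):
            self.rules = {}

        def infer(self, condition, outcome):
            self.rules[condition] = outcome    # store an inferred rule

        def opinion(self, condition):
            return self.rules.get(condition)   # None = no rule covers this

    class EpisodicStore:
        """Right-hemisphere style: single, emotionally weighted episodes."""
        def __init__(self):
            self.episodes = []                 # (condition, outcome, weight)

        def remember(self, condition, outcome, weight):
            self.episodes.append((condition, outcome, weight))

        def opinion(self, condition):
            matches = [(o, w) for c, o, w in self.episodes if c == condition]
            if not matches:
                return None
            return max(matches, key=lambda m: m[1])[0]  # most salient episode wins

    def decide(condition, rules, episodes):
        """Exchange of 'opinions': a vivid single memory can override a rule."""
        episodic = episodes.opinion(condition)
        return episodic if episodic is not None else rules.opinion(condition)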

Ian

While the left hemisphere of the brain may be more involved in making emotional decisions, emotion itself is unlikely to occur exclusively in one side of the brain, and the interplay between emotions and rational thought within the brain is likely to be substantially more complex than two completely separate circuits. For instance, a study on rhesus macaques found that dopamine and other hormones associated with emotional responses essentially implement temporal difference learning within the brain (I'm still looking for a link to it). This suggests that separating emotional and rational thought into two separate neural networks probably wouldn't be practical, even if we had the resources to build neural networks on the scale of brain hemispheres (which we don't, or at least not within most research budgets).
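For reference, the temporal difference update itself is tiny; in the dopamine analogy, the prediction error (delta below) is the quantity the dopamine signal is said to track. A minimal TD(0) sketch in Python, with states, reward, and constants chosen arbitrarily:

    # TD(0): nudge a state's value estimate toward the reward actually received
    # plus the discounted value estimate of the state that follows.
    alpha, gamma = 0.1, 0.9          # learning rate, discount factor
    V = {"cue": 0.0, "food": 0.0}    # value estimate per state

    # One experienced transition: see the cue, then receive food worth 1.0
    state, next_state, reward = "cue", "food", 1.0

    delta = reward + gamma * V[next_state] - V[state]  # prediction error
    V[state] += alpha * delta                          # "dopamine-like" update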

This idea is supported by Sloman and Croucher's suggestion that emotion will likely be an unavoidable emergent property of a sufficiently advanced intelligent system. Such systems (discussed in detail in the paper) will be much more complex than straight-up neural nets. More importantly, though, the emotions won't be something that you can localize to one part of the system.

seaotternerd