
I am reading Jang's book *Neuro-Fuzzy and Soft Computing*, and in the 2nd chapter the author discusses the Schweizer and Sklar T-norm, given by this equation:

`Tss(a, b, p) = max(0, a^-p + b^-p - 1)^(-1/p)`

It's a handy T-norm. In the exercises (#20, page 45), it asks what happens to `Tss(a, b, p)` as `p -> 0`; in fact, it asks to show that the whole expression reduces to just `ab` in the end. I tried different things, and in the end I took the natural log, but I got `-1/p * Ln(a^-p + b^-p - 1)` and I have no idea where to go from here!

Can anybody suggest anything? Thanks for your help.

P.S.: Is there a simple way of expanding `Ln(x + y)` in general?
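For what it's worth, a quick numerical sanity check (assuming the `max(0, a^-p + b^-p - 1)^(-1/p)` form above; the function name `t_ss` is just my own) does suggest the limit is `ab`:

```python
def t_ss(a, b, p):
    """Schweizer-Sklar T-norm: Tss(a, b, p) = max(0, a^-p + b^-p - 1)^(-1/p)."""
    inner = max(0.0, a**-p + b**-p - 1.0)
    return inner ** (-1.0 / p)

a, b = 0.3, 0.7
for p in (1.0, 0.1, 0.01, 0.001):
    # The values approach a*b = 0.21 as p shrinks toward 0.
    print(p, t_ss(a, b, p))
```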

Babakslt
    Look what happens to the exponent as `p->0`. What happens to an expression that has such an exponent? – CoconutBandit Oct 31 '16 at 13:57
  • You may want to try one of the [math stack exchanges](http://stackexchange.com/sites#science), this doesn't have a whole lot to do with programming per se. – deceze Oct 31 '16 at 13:59
  • @deceze Sorry, I thought that since it's in an artificial intelligence textbook, this might be a good place to ask. – Babakslt Oct 31 '16 at 14:03
  • @CoconutBandit Maybe I left something out! It asks to show that the whole thing equals the product of `a` and `b`: `ab`. – Babakslt Oct 31 '16 at 14:04
  • Call us when you're about to *implement* an artificial intelligence in a concrete programming language… :) – deceze Oct 31 '16 at 14:05
  • @deceze Thank you, and thanks for your advice. I will ask it on the math Stack Exchange if I can't find the answer. – Babakslt Oct 31 '16 at 14:07
    Sorry, I actually misread your question. You need to use the fact that `x = e^{Ln(x)}`. Don't look at the limit of the log-space. Also, WolframAlpha would probably help here. – CoconutBandit Oct 31 '16 at 14:28
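Working out that hint, a sketch of the limit (assuming the standard Schweizer–Sklar form `max(0, a^-p + b^-p - 1)^(-1/p)`, and using the first-order expansion `x^-p = e^{-p Ln x} ≈ 1 - p Ln x`):

```latex
% Expand each term to first order in p:  x^{-p} = e^{-p\ln x} = 1 - p\ln x + O(p^2)
\begin{aligned}
a^{-p} + b^{-p} - 1 &= 1 - p\ln(ab) + O(p^2), \\
\ln T_{ss}(a,b,p) &= -\tfrac{1}{p}\,\ln\!\bigl(1 - p\ln(ab) + O(p^2)\bigr)
  \xrightarrow[\,p \to 0\,]{} \ln(ab)
  \quad\text{since } \ln(1+x) = x + O(x^2), \\
T_{ss}(a,b,p) &= e^{\ln T_{ss}(a,b,p)}
  \xrightarrow[\,p \to 0\,]{} e^{\ln(ab)} = ab.
\end{aligned}
```

So exponentiating back out of log-space, exactly as the hint suggests, gives the product `ab`.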

0 Answers