
How to find the values of weight1, weight2, and bias? What's a generalized mathematical way to find these three values for any problem?

import pandas as pd


weight1 = 0.0
weight2 = 0.0
bias = 0.0

test_inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
correct_outputs = [False, False, False, True]
outputs = []

for test_input, correct_output in zip(test_inputs, correct_outputs):
    linear_combination = weight1 * test_input[0] + weight2 * test_input[1] + bias
    output = int(linear_combination >= 0)
    is_correct_string = 'Yes' if output == correct_output else 'No'
    outputs.append([test_input[0], test_input[1], linear_combination, output, is_correct_string])


num_wrong = len([output[4] for output in outputs if output[4] == 'No'])
output_frame = pd.DataFrame(outputs, columns=['Input 1', '  Input 2', '  Linear Combination', '  Activation Output', '  Is Correct'])
if not num_wrong:
    print('Nice!  You got it all correct.\n')
else:
    print('You got {} wrong.  Keep trying!\n'.format(num_wrong))
print(output_frame.to_string(index=False))
Rahul Vansh

5 Answers


The problem asks you to choose weight1, weight2, and bias so that the inputs [(0,0), (0,1), (1,0), (1,1)] produce [False, False, False, True]. In this context, 'False' means the linear combination comes out negative, while 'True' means it comes out non-negative (the code tests linear_combination >= 0). So, for each input you evaluate whether:

x1*weight1 + x2*weight2 + bias

is negative or non-negative.

For example, setting weight1=1, weight2=1, and bias=-1.1 (one possible solution), you get for the first input:

0*1 + 0*1 + (-1.1) = -1.1 which is negative, meaning it evaluates to False

for the next input:

0*1 + 1*1 + (-1.1) = -0.1 which is negative, meaning it evaluates to False

for the next input:

1*1 + 0*1 + (-1.1) = -0.1 which is negative, meaning it evaluates to False

and for the last input:

1*1 + 1*1 + (-1.1) = +0.9 which is positive, meaning it evaluates to True
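The four hand calculations above can be checked with the same loop the question uses; a minimal sketch:

```python
# Check the proposed solution (weight1=1, weight2=1, bias=-1.1)
# against the AND truth table from the question.
weight1, weight2, bias = 1.0, 1.0, -1.1

test_inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
correct_outputs = [False, False, False, True]

for (x1, x2), correct in zip(test_inputs, correct_outputs):
    linear_combination = weight1 * x1 + weight2 * x2 + bias
    output = linear_combination >= 0  # the activation test from the question
    print((x1, x2), round(linear_combination, 2), output == correct)
```

Each line prints True in the last column, confirming the solution.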

alexv

The following also worked for me:

weight1 = 1.5
weight2 = 1.5
bias = -2

With the normal equations you do not need a bias unit, so this may be what you are after (keep in mind I have recast your True and False values to 1 and 0, respectively):

import numpy as np

A = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])
b = np.array([[0], [0], [0], [1]])

# Normal equations: x = (A^T A)^-1 A^T b
x = np.linalg.inv(A.T @ A) @ A.T @ b

print(x)

Yields:

[[0.33333333]
 [0.33333333]]

Further details on the solution are given here.
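As a side note, the same least-squares solution can be obtained in one call with `np.linalg.lstsq`, which minimizes ||Ax - b|| directly without forming and inverting A^T A; a sketch, assuming the same 0/1 recasting of the targets:

```python
import numpy as np

# Least-squares fit for the AND targets, recast as 0/1,
# solved directly instead of via the normal equations.
A = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
b = np.array([0.0, 0.0, 0.0, 1.0])

x, *_ = np.linalg.lstsq(A, b, rcond=None)
print(x)  # → [0.33333333 0.33333333]
```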

rahlf23

The following worked for me:

weight1 = 1.5
weight2 = 1.5
bias = -2

Will update when I better understand why.

Konrad

The linear combination is x1*w1 + x2*w2 + bias, and the test is:

linear_combination >= 0

from the given input values:

test_inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]

AND is true for exactly one of the four inputs, so the output of a typical AND operation should be:

1   1   True 
1   0   False
0   1   False
0   0   False

So, when we plug the test inputs into the equation x1*w1 + x2*w2 + bias, there should be only one true outcome. As noted above, the test is that the linear combination is greater than or equal to zero. The question is looking for this output to be true exactly once, as in the truth table above. To get the false values, the computation should therefore come out negative. The easiest way is to try the equation with small weights and a negative bias. I tried:

weight1 = 1
weight2 = 1
bias = -2
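To the question's broader point about a generalized way to find the three values: this kind of trial and error can be automated. None of the answers here show it, but the standard approach is the perceptron learning rule, which nudges the weights and bias after each misclassified example. A minimal sketch (the learning rate and epoch count below are arbitrary choices, not from any answer on this page):

```python
# Perceptron learning rule: start from zero weights and adjust
# after every misclassified example until the AND table is learned.
test_inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
correct_outputs = [0, 0, 0, 1]  # True/False recast as 1/0

w1, w2, bias = 0.0, 0.0, 0.0
learn_rate = 0.1

for _ in range(25):  # epochs; AND converges well within this budget
    for (x1, x2), target in zip(test_inputs, correct_outputs):
        prediction = int(w1 * x1 + w2 * x2 + bias >= 0)
        error = target - prediction  # -1, 0, or +1
        w1 += learn_rate * error * x1
        w2 += learn_rate * error * x2
        bias += learn_rate * error

print(w1, w2, bias)  # learned values that classify AND correctly
```

The exact values it converges to depend on the learning rate and the order of the examples, but any converged result satisfies the same sign conditions worked out by hand above.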