
The following code shows the problem I am facing:

```python
import random
import torch

def fakeDataGenerator(chanNum=31):
    # Generates the kind of data I want to recover: it is continuous and
    # piecewise linear, with the slope flipping sign at a few peak positions.
    # (Sampling from range(1, chanNum) avoids a peak at index 0, which would
    # make the loop below spin forever.)
    peaks = random.sample(range(1, chanNum), random.choice(range(3, 10)))
    peaks.append(chanNum)
    peaks.sort()
    out = [random.choice(range(-5, 5))]
    delta = 1
    while len(out) < chanNum:
        if len(out) < peaks[0]:
            out.append(out[-1] + delta)
        elif len(out) == peaks[0]:
            delta *= -1
            peaks.pop(0)
    return out

originalData = torch.tensor(fakeDataGenerator(31)).reshape(1, 31).float()

# The encoder is what messes the data up: it maps 31 channels down to 9.
encoder = torch.rand((31, 9)).float()
code = torch.matmul(originalData, encoder)  # the code, messed up by the encoder

# We can make use of the encoder matrix to decode the data. For example,
# here I apply pinverse to recover it, but...
decoder = torch.pinverse(encoder)
decoded = torch.matmul(code, decoder)
print(decoded - originalData)  # the result is no good
```
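To make the failure concrete: the encoder maps 31 numbers down to 9, so information is lost before decoding ever starts. The sketch below (using NumPy for the linear algebra; `torch.pinverse` behaves the same way) shows that the pinverse reconstruction is exactly the projection of the original data onto a 9-dimensional subspace, so 22 dimensions are discarded no matter what:

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=(1, 31))       # stands in for originalData
E = rng.random((31, 9))            # same shape as the encoder above
c = x @ E                          # only 9 numbers survive the encoding

x_hat = c @ np.linalg.pinv(E)      # the pinverse reconstruction

# E @ pinv(E) is an orthogonal projection onto the 9-dimensional
# column space of E, so x_hat is x with 22 dimensions thrown away.
P = E @ np.linalg.pinv(E)
print(np.linalg.matrix_rank(P))    # 9
print(np.allclose(x_hat, x @ P))   # True: pinverse recovery == projection
print(np.allclose(x_hat, x))       # False: the lost component never returns
```

So any improvement has to come from prior knowledge about the data, not from a better generic inverse.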

Can I make use of the characteristics of the original data and the encoder to recover the original data more accurately? The environment this program runs in doesn't allow complicated models such as neural networks.
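For concreteness, here is a minimal sketch of the kind of lightweight approach the constraint still allows, assuming the piecewise-linear structure can be encoded as a second-difference smoothness penalty (Tikhonov-regularized least squares; the operator `D`, the weight `lam`, and the test signal are illustrative choices of mine, not part of the code above):

```python
import numpy as np

rng = np.random.default_rng(1)

# A piecewise-linear test signal, similar in spirit to fakeDataGenerator.
n = 31
knots = [0, 7, 15, 23, 30]
x = np.interp(np.arange(n), knots, rng.integers(-5, 6, size=5)).reshape(1, n)

E = rng.random((n, 9))             # encoder, as in the torch code
c = x @ E

# Second-difference operator: (D @ v)[i] = v[i] - 2*v[i+1] + v[i+2].
# For piecewise-linear data this is zero everywhere except at the peaks.
D = np.zeros((n - 2, n))
for i in range(n - 2):
    D[i, i:i + 3] = [1.0, -2.0, 1.0]

# Minimize ||x'E - c||^2 + lam * ||D x'^T||^2 over row vectors x'.
# Setting the gradient to zero gives x' (E E^T + lam D^T D) = c E^T.
lam = 1e-2                         # hypothetical weight; needs tuning
A = E @ E.T + lam * D.T @ D        # symmetric, and invertible in general
x_reg = np.linalg.solve(A, (c @ E.T).T).T

x_pinv = c @ np.linalg.pinv(E)
err_reg = np.linalg.norm(x_reg - x)
err_pinv = np.linalg.norm(x_pinv - x)
print(err_reg, err_pinv)           # the regularized error is typically far smaller
```

This stays within plain matrix algebra (one linear solve, no training), and the same expressions can be written with `torch.linalg` if staying in PyTorch is preferable.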

Irrawa