
Given:

let weights = [0.5;0.4;0.3]
let X = [[2;3;4];[7;3;2];[5;3;6]]

what I want is:
wX = [(0.5)*[2;3;4];(0.4)*[7;3;2];(0.3)*[5;3;6]]
I would like to know an elegant way to do this with lists as well as with arrays. Additional optimization information is welcome.

Guy Coder
D.S.
  • Did you mean 0.5 * [2 3 4]? – VoronoiPotato Feb 01 '17 at 22:32
  • yes, the formatting removed stars – D.S. Feb 01 '17 at 22:35
  • '[2,3,4;7,3,2;5,3,6]' is not a list of lists. It is a list of 3-tuples, `int*int*int` – Anton Schwaighofer Feb 01 '17 at 22:42
  • Is this for a neural network? If so you might want to look at using the matrix functions of [MathNet Numerics](https://numerics.mathdotnet.com/Matrix.html) – Guy Coder Feb 02 '17 at 00:03
  • I snuck a peek at your profile and saw you are working with machine learning. Be aware that a lot of neural network work is done with Python, and Python has [duck typing](https://en.wikipedia.org/wiki/Duck_typing), which makes matrices of different sizes easy to work with in Python but harder in F#. As such you might be interested in [Returning arrays of different dimensions from one function; is it possible in F#?](http://stackoverflow.com/questions/34599909/returning-arrays-of-different-dimensions-from-one-function-is-it-possible-in-f) – Guy Coder Feb 02 '17 at 00:13
  • When you get to the sigmoid function you might want to look at [MathNet Raise Scalar by a Matrix](http://stackoverflow.com/a/37594281/1243762) – Guy Coder Feb 02 '17 at 00:16
  • When you load the raw data I found [Array.blit](https://msdn.microsoft.com/en-us/visualfsharpdocs/conceptual/array.blit%5B't%5D-function-%5Bfsharp%5D) to be the sharpest tool in the toolbox. – Guy Coder Feb 02 '17 at 00:22
  • @GuyCoder thanks for your input and extra attention to detail, I'm just getting into F#, especially because I work for a company adamant about not using Python in production. It is nice to learn such nuances between how F# is different from Python, looks like I need some in-depth reading. And yes I was trying to see if I can write a basic neural network in F# to understand the language better. – D.S. Feb 02 '17 at 00:38
  • As you are aware, doing production neural networks without the GPU or something of similar power is insane. If you do use a library to help you, check to make sure it can use the GPU if you need that. Many don't mention it and you may not find out until you have wasted valuable time. – Guy Coder Feb 02 '17 at 00:45
  • Mathnet Numerics also has some nice random number generator functions. I only used F# to learn how the algorithms and concepts work and it was well worth the effort. Now that TensorFlow works on Windows with the GPU I will be getting back to using it. – Guy Coder Feb 02 '17 at 00:47

4 Answers


You write about a list of lists, but your code shows a list of tuples. Taking the liberty to adjust for that, a solution would be

let weights = [0.5;0.4;0.3]
let X = [[2;3;4];[7;3;2];[5;3;6]]
X
|> List.map2 (fun w x -> 
    x 
    |> List.map (fun xi -> 
        (float xi) * w
    )
) weights

Depending on how comfortable you are with the syntax, you may prefer a one-liner like

List.map2 (fun w x -> List.map (float >> (*) w) x) weights X

The same library functions exist for sequences (Seq.map2, Seq.map) and arrays (in the Array module).
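For completeness, here is the same shape with arrays, converting each element to float as above (a sketch of one idiomatic way, not the only one; arrays also avoid the per-cons allocation of lists, which matters for the optimization question):

```fsharp
let weightsA = [| 0.5; 0.4; 0.3 |]
let XA = [| [| 2; 3; 4 |]; [| 7; 3; 2 |]; [| 5; 3; 6 |] |]

// Pair each weight with its row, then scale every element of that row
let wXA =
    Array.map2 (fun w row -> Array.map (fun xi -> w * float xi) row) weightsA XA
```

As with `List.map2`, `Array.map2` throws if the two inputs differ in length, which is a useful sanity check that each row actually has a weight.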

Anton Schwaighofer
  • List.map2 FTW! :) – s952163 Feb 01 '17 at 23:29
  • thanks for correcting my question, also nice to see dual approaches, you have answered my question twice :) – D.S. Feb 01 '17 at 23:38
  • on a side note, I was wondering if there is a library that makes it easy to perform standard matrix like operations like scaling, linear sum etc much like R or Matlab, this approach seems very complex, takes the focus away from what to how. – D.S. Feb 01 '17 at 23:41
  • MSDN has an overview list [here](https://msdn.microsoft.com/en-us/library/hh304368(v=vs.100).aspx). The post is from 2010, but some of the libraries are still around. I hear good things about [Math.NET Numerics](https://numerics.mathdotnet.com/) but have not used it myself. – Anton Schwaighofer Feb 01 '17 at 23:46

This is much more than an answer to the specific question, but after a chat in the comments I learned that the question is part of a neural network written in F#, so I am posting this. It covers the question and implements the feedforward part of a neural network, making use of MathNet Numerics.

This code is an F# translation of part of the Python code from Neural Networks and Deep Learning.

Python

def backprop(self, x, y):
    """Return a tuple ``(nabla_b, nabla_w)`` representing the
    gradient for the cost function C_x.  ``nabla_b`` and
    ``nabla_w`` are layer-by-layer lists of numpy arrays, similar
    to ``self.biases`` and ``self.weights``."""
    nabla_b = [np.zeros(b.shape) for b in self.biases]
    nabla_w = [np.zeros(w.shape) for w in self.weights]
    # feedforward
    activation = x
    activations = [x] # list to store all the activations, layer by layer
    zs = [] # list to store all the z vectors, layer by layer
    for b, w in zip(self.biases, self.weights):
        z = np.dot(w, activation)+b
        zs.append(z)
        activation = sigmoid(z)
        activations.append(activation)

F#

module NeuralNetwork1 =

    //# Third-party libraries
    open MathNet.Numerics.Distributions         // Normal.Sample
    open MathNet.Numerics.LinearAlgebra         // Matrix

    type Network(sizes : int array) = 

        let mutable (_biases : Matrix<double> list) = []
        let mutable (_weights : Matrix<double> list) = []    

        member __.Biases
            with get() = _biases
            and set value = 
                _biases <- value
        member __.Weights
            with get() = _weights
            and set value = 
                _weights <- value
        member __.NumLayers = sizes.Length
        // Elementwise sigmoid; added here so the Backprop member below is self-contained
        member __.Sigmoid (z : Matrix<double>) = Matrix.map (fun v -> 1.0 / (1.0 + exp -v)) z

        member __.Backprop (x : Matrix<double>) (y : Matrix<double>) =
            // Note: There is a separate member for feedforward. This one is only used within Backprop 
            // Note: In the text layers are numbered from 1 to n   with 1 being the input and n   being the output
            //       In the code layers are numbered from 0 to n-1 with 0 being the input and n-1 being the output
            //       Layers
            //         1     2     3    Text
            //         0     1     2    Code
            //       784 -> 30 -> 10
            let feedforward () : (Matrix<double> list * Matrix<double> list) =
                let (bw : (Matrix<double> * Matrix<double>) list) = List.zip __.Biases __.Weights
                let rec feedforwardInner layer activation zs activations =
                    match layer with
                    | l when l < (__.NumLayers - 1) ->
                        let (bias, weight) = bw.[layer]
                        let z = weight * activation + bias
                        let activation = __.Sigmoid z
                        feedforwardInner (layer + 1) activation (z :: zs) (activation :: activations)
                    | _ -> 
                        // Normally with recursive functions that build lists for returning,
                        // the final list(s) would be reversed before returning.
                        // However, since the returned lists will be accessed in reverse order
                        // for the backpropagation step, we leave them in reverse order.
                        (zs, activations)
                feedforwardInner 0 x [] [x]

In `weight * activation`, `*` is an overloaded operator operating on `Matrix<double>`.

Relating back to your example data, using MathNet Numerics arithmetic:

let weights = [0.5;0.4;0.3]
let X = [[2;3;4];[7;3;2];[5;3;6]]

First the values for X need to be converted to float, and the data lifted into MathNet types via the F# builders vector and matrix:

let weights = vector [0.5; 0.4; 0.3]
let x1 = matrix [[2.0; 3.0; 4.0]; [7.0; 3.0; 2.0]; [5.0; 3.0; 6.0]]

Now notice that x1 is a matrix and weights is a vector,

so we can just multiply

 let wx1 = weights * x1
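Note that `weights * x1` is a vector-matrix product, which sums down each column. If what is wanted is the row-by-row scaling from the question (row i multiplied by weights.[i]), here is a minimal sketch, assuming MathNet's F# builders (`vector`, `matrix`) and `DenseMatrix.init` are available:

```fsharp
open MathNet.Numerics.LinearAlgebra

let weights = vector [0.5; 0.4; 0.3]
let x1 = matrix [[2.0; 3.0; 4.0]; [7.0; 3.0; 2.0]; [5.0; 3.0; 6.0]]

// Scale row i of x1 by weights.[i], building the result elementwise
let wX1 =
    DenseMatrix.init x1.RowCount x1.ColumnCount (fun i j -> weights.[i] * x1.[i, j])
```

The same effect can be achieved by multiplying on the left by a diagonal matrix built from weights, which is closer to textbook notation.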

Since the way I validated the code was a bit more thorough than most, I will explain it so that you don't have doubts about its validity.

When working with Neural Networks and in particular mini-batches, the starting numbers for the weights and biases are random and the generation of the mini-batches is also done randomly.

I know the original Python code was valid and I was able to run it successfully and get the same results as indicated in the book, meaning that the initial successes were within a couple of percent of the book and the graphs of the success were the same. I did this for several runs and several configurations of the neural network as discussed in the book. Then I ran the F# code and achieved the same graphs.

I also copied the starting random number sets from the Python code into the F# code so that while the data generated was random, both the Python and F# code used the same starting numbers, of which there are thousands. I then single stepped both the Python and F# code to verify that each individual function was returning a comparable float value, e.g. I put a break point on each line and made sure I checked each one. This actually took a few days because I had to write export and import code and massage the data from Python to F#.

See: How to determine type of nested data structures in Python?

I also tried a variation where I replaced the F# list with a linked list, i.e. LinkedList<Matrix<double>>, but found no increase in speed. It was an interesting exercise.

Guy Coder
  • Do you use __.Foo as a convention for something? – D.S. Feb 02 '17 at 02:16
  • @D.S. No, `__.Foo` is an instance of a member. See: [F# instance syntax](http://stackoverflow.com/a/3913685/1243762) I rarely use members with my functional code, but I did so here because when I translate I like the code to be as close to the original as possible. I also avoid mutables like the plague, but here have thousands of mutable as part of the matrices. It's like poetry, you are not allowed to break the rules until you understand the rules. – Guy Coder Feb 02 '17 at 02:22
  • @D.S. As I did this a year ago and the book is a living book, it looks like some of the book has changed. I will have to re-read it to see what has been updated. – Guy Coder Feb 02 '17 at 02:28
  • @D.S. I added more of the code to put the instances into context. – Guy Coder Feb 02 '17 at 03:02

If I understand correctly,

let wX =
    (weights, X)
    ||> List.map2 (fun w (a, b, c) ->
        w * float a,
        w * float b,
        w * float c)
ildjarn

This is an alternative way to achieve this using Math.NET: https://numerics.mathdotnet.com/Matrix.html#Arithmetics

D.S.
  • I know it is hard to tell, but if you click on the `*` in my answer after `weight * activation` you will see that it is a link to what you just posted. – Guy Coder Feb 02 '17 at 11:32
  • Aah almost missed that one. Your extension is an amazing explanation btw. Thank you so much. – D.S. Feb 02 '17 at 12:42
  • Thanks. I plan to post the entire translation of the book to F# on GitHub but I still have to do the convolutional neural network and translating that to F# is a bit more tricky than the first part. Since the exercise was to understand neural networks at the low level and I achieved my goal, I just moved on. Never really appreciated duck typing until doing these translations. Put it on hold and it has been there for over a year now. – Guy Coder Feb 02 '17 at 13:12
  • Another fun thing I did with it was to use symbols for identifiers, `let ``∇b`` = ``Δ`` :: ``∇b`` ` but the feedback is that people would rather have names. – Guy Coder Feb 02 '17 at 13:13