This is not how Lambda is generally used in a machine learning pipeline. Usually, you would use Lambda to do some data formatting and pass the result to a SageMaker endpoint that is hosting a trained model. But for a very small model, Lambda can work as your backend.
First, whether you are using Lambda or any other backend, you most likely don't want to train the model each time your endpoint is invoked; you just want to do the inference (unless we are specifically talking about online learning).
So, train your model offline as you normally would and then use that trained model for inference. Logistic regression is quite a simple algorithm (at least the inference part of it), so you can extract the relevant parameters and "hard-code" them into the Lambda function along with the inference logic.
Here is how you can extract the model's coefficients and intercept:
# This is the offline part
from sklearn.linear_model import LogisticRegression

classifier = LogisticRegression(random_state=0)
classifier.fit(X_train, y_train)
coef, intercept = classifier.coef_, classifier.intercept_
Then use these values in your Lambda function. Here is how you might implement the inference (I am using numpy here, but feel free to implement it however you like). Also, I am omitting the Lambda boilerplate code.
# This goes into your lambda function
import numpy as np

coef, intercept = ...  # hard-coded parameters from the offline step

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def predict(x, intercept, coef):
    return sigmoid(np.dot(x, coef.T) + intercept)

# compute prediction
predict(X[0], intercept, coef)
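For completeness, here is a minimal sketch of the Lambda boilerplate I omitted above. The event shape (a JSON body with a `features` list), the example parameter values, and the 0.5 decision threshold are all assumptions; adapt them to however your endpoint is actually invoked.

```python
import json
import numpy as np

# Hypothetical hard-coded parameters from the offline step (2 features here)
COEF = np.array([[0.8, -1.2]])
INTERCEPT = np.array([0.3])

def sigmoid(x):
    return 1 / (1 + np.exp(-x))

def predict(x, intercept, coef):
    return sigmoid(np.dot(x, coef.T) + intercept)

def lambda_handler(event, context):
    # Assumes the event body carries the feature vector as a JSON list
    features = np.array(json.loads(event["body"])["features"])
    prob = float(predict(features, INTERCEPT, COEF)[0])
    return {
        "statusCode": 200,
        "body": json.dumps({"probability": prob, "label": int(prob >= 0.5)}),
    }
```

If you expose this through API Gateway, the handler receives the request body as a string in `event["body"]`, which is why it is parsed with `json.loads` here.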