I want to integrate an S3 and EventBridge trigger for a Lambda function that stores particular logs from CloudWatch in a SQL Server database, and for that I'm using the pyodbc library.

File name: lambda_function.py

import json
import os
import pyodbc
import urllib.parse

def lambda_handler(event, context):
    try:
        # Extract S3 event details
        s3_event = event['Records'][0]['s3']
        bucket_name = s3_event['bucket']['name']
        object_key = urllib.parse.unquote_plus(s3_event['object']['key'])
        file_length = s3_event['object']['size']
        
        # Extract additional fields from the record (if available)
        record_fields = event.get('additionalFields', {})
        provider_id = record_fields.get('providerID', None)
        created_at = record_fields.get('createdAt', None)
        updated_at = record_fields.get('updatedAt', None)

        # Store file details locally
        source_path = 's3://' + bucket_name + '/' + object_key
        file_name = os.path.basename(object_key)

        # Connect to the Microsoft SQL database
        db_server = '**********'
        db_name = '**********'
        db_username = '*******'
        db_password = '********'

        conn_str = f'DRIVER={{ODBC Driver 17 for SQL Server}};SERVER={db_server};DATABASE={db_name};UID={db_username};PWD={db_password}'
        conn = pyodbc.connect(conn_str)
        cursor = conn.cursor()

        # Upsert file details. Note: ON DUPLICATE KEY UPDATE is MySQL syntax; SQL Server uses MERGE.
        sql_query = ("MERGE RawRecording_temp AS t USING (SELECT ? AS providerID, ? AS createdAt, ? AS updatedAt, ? AS sourcepath, ? AS fileName, ? AS fileSize) AS s "
                     "ON t.sourcepath = s.sourcepath WHEN MATCHED THEN UPDATE SET fileName = s.fileName "
                     "WHEN NOT MATCHED THEN INSERT (providerID, createdAt, updatedAt, sourcepath, fileName, fileSize) VALUES (s.providerID, s.createdAt, s.updatedAt, s.sourcepath, s.fileName, s.fileSize);")
        cursor.execute(sql_query, provider_id, created_at, updated_at, source_path, file_name, file_length)
        conn.commit()

        # Close the database connection
        cursor.close()
        conn.close()

        return {
            'statusCode': 200,
            'body': json.dumps('File details were inserted/updated in the database successfully!')
        }
    except Exception as e:
        return {
            'statusCode': 500,
            'body': json.dumps(f'Error: {str(e)}')
        }

However, I receive this error:

Response
{
  "statusCode": 500,
  "body": "\"Error: module 'pyodbc' has no attribute 'connect'\""
}

Function Logs
START RequestId: 57ccd513-4c38-4939-8c53-edfe73540f4a Version: $LATEST
END RequestId: 57ccd513-4c38-4939-8c53-edfe73540f4a
REPORT RequestId: 57ccd513-4c38-4939-8c53-edfe73540f4a  Duration: 1.16 ms   Billed Duration: 2 ms   Memory Size: 128 MB Max Memory Used: 36 MB  Init Duration: 92.36 ms

I have checked the handler field; it is set to "lambda_function.lambda_handler". This is the project structure I'm maintaining:

lambda_function.zip
├── lambda_function
│   └── lambda_function.py
└── lambda-packages
    ├── pyodbc
    └── other_dependency

I even tried deploying with Terraform and packaging everything from VS Code, but ended up with the same error.
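For context, this particular error (an AttributeError rather than an ImportError) is consistent with Python's namespace-package behavior: if a directory named like the module is on the path but contains nothing Python can load (e.g. a pyodbc folder whose compiled extension wasn't built for the Lambda runtime), Python 3.3+ silently imports the bare directory as an empty namespace package. A minimal sketch reproducing this with a hypothetical `fakeodbc` directory (the name is illustrative, not a real package):

```python
import importlib
import os
import sys
import tempfile

# Hypothetical "fakeodbc" stands in for pyodbc. A directory on sys.path with
# no __init__.py and no loadable module inside it is imported as an *empty
# namespace package* on Python >= 3.3, instead of raising ImportError.
tmp = tempfile.mkdtemp()
os.makedirs(os.path.join(tmp, "fakeodbc"))
sys.path.insert(0, tmp)

mod = importlib.import_module("fakeodbc")
print(hasattr(mod, "connect"))  # False

try:
    mod.connect("DRIVER=...;SERVER=...")
except AttributeError as exc:
    # Same shape as the Lambda error:
    # module 'fakeodbc' has no attribute 'connect'
    print(exc)
```

If this is what is happening, `pyodbc.__file__` inside the Lambda would be `None` and `dir(pyodbc)` would show no `connect`, which can help confirm the diagnosis.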

1 Answer

You should modify your Lambda function settings and set the proper path to the handler. Since your lambda_function.py is inside the folder lambda_function, the full path should be:

lambda_function/lambda_function.lambda_handler
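As a sanity check, you can mimic locally how the Python runtime resolves that handler string. This is a simplified sketch, not the runtime's actual code: it splits on the last dot and imports the remainder, treating slashes as package separators (`resolve_handler` and the throwaway layout are illustrative):

```python
import importlib
import os
import sys
import tempfile

def resolve_handler(handler):
    # Simplified mimic of the Lambda Python runtime: split "path/mod.func"
    # on the last dot, import the module part with "/" mapped to ".".
    module_path, _, func_name = handler.rpartition(".")
    module = importlib.import_module(module_path.replace("/", "."))
    return getattr(module, func_name)

# Build a throwaway layout matching the question's zip structure.
root = tempfile.mkdtemp()
pkg = os.path.join(root, "lambda_function")
os.makedirs(pkg)
with open(os.path.join(pkg, "lambda_function.py"), "w") as f:
    f.write("def lambda_handler(event, context):\n    return 'ok'\n")
sys.path.insert(0, root)
importlib.invalidate_caches()

fn = resolve_handler("lambda_function/lambda_function.lambda_handler")
print(fn({}, None))  # ok
```

If the module part cannot be imported under the configured handler string, you get exactly the "Unable to import module" failure mode seen in the comments below.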
Marcin
  • I have tried the suggested change in the handler field, but ended up with the same error: "Unable to import module 'lambdA_function/lambda_function': No module named 'lambdA_function'" – Fitness Freak Aug 11 '23 at 12:25
  • @FitnessFreak why the capital A in `lambdA_function`? – jarmod Aug 11 '23 at 16:40
  • @FitnessFreak My answer does not have `lambdA_function`, and your folder is not named that way either. – Marcin Aug 11 '23 at 22:49