169

Any help on this problem will be greatly appreciated.

So basically I want to run a query against my SQL database and store the returned data as a Pandas data structure.

I have attached the code for my query.

I am reading the documentation on Pandas, but I have trouble identifying the return type of my query.

I tried to print the query result, but it doesn't give any useful information.

Thanks!

from sqlalchemy import create_engine

engine2 = create_engine('mysql://THE DATABASE I AM ACCESSING')
connection2 = engine2.connect()
dataid = 1022
resoverall = connection2.execute("""
    SELECT
       sum(BLABLA) AS BLA,
       sum(BLABLABLA2) AS BLABLABLA2,
       sum(SOME_INT) AS SOME_INT,
       sum(SOME_INT2) AS SOME_INT2,
       100*sum(SOME_INT2)/sum(SOME_INT) AS ctr,
       sum(SOME_INT2)/sum(SOME_INT) AS cpc
    FROM daily_report_cooked
    WHERE campaign_id = '%s'
""" % dataid)

So I sort of want to understand what the format/datatype of my variable "resoverall" is and how to put it into a Pandas data structure.

Paul P
user1613017

18 Answers

175

Here's the shortest code that will do the job:

from pandas import DataFrame
df = DataFrame(resoverall.fetchall())
df.columns = resoverall.keys()

You can go fancier and parse the types as in Paul's answer.
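For instance, a minimal sketch of such a type pass, reusing the column aliases from the question's query (which columns need coercing depends on your driver):

import pandas as pd

# fetchall() often hands back strings or Decimals, so coerce the
# columns you compute on; unparseable values become NaN with 'coerce'
df["SOME_INT"] = pd.to_numeric(df["SOME_INT"], errors="coerce")
df["ctr"] = pd.to_numeric(df["ctr"], errors="coerce")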

Daniel
  • This worked for me for 1,000,000 records fetched from an Oracle database. – precise Oct 14 '18 at 07:39
  • `df = DataFrame(cursor.fetchall())` returns `ValueError: DataFrame constructor not properly called!`; it appears that a tuple of tuples is not acceptable for the DataFrame constructor. There is also no `.keys()` on the cursor in either dictionary or tuple mode. – Mobigital Jan 20 '19 at 20:45
  • Just note that the keys method will only work with results obtained using sqlalchemy. Pyodbc uses the description attribute for columns. – Filippos Zofakis Aug 29 '19 at 21:06
  • Can this work for Postgres databases? I am trying to get column names for the result dataframe with `keys()` function but can't get it to work. – Bowen Liu Feb 20 '20 at 01:09
  • @BowenLiu Yes, with psycopg2 you can use `df.columns = [x.name for x in resoverall.description]` – Gnudiff Feb 28 '20 at 11:36
  • I'm getting the same error as @Mobigital, can this work for SQLite databases? I'm trying to get column names for the result dataframe but can't get it to work. – Guilherme Matheus Feb 17 '21 at 13:45
  • Why does `df.columns = resoverall.keys()` not work? – user398843 Jul 04 '22 at 06:45
171

Edit: Mar. 2015

As noted below, pandas now uses SQLAlchemy both to read from (read_sql) and insert into (to_sql) a database. The following should work:

import pandas as pd

df = pd.read_sql(sql, cnxn)
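
And for the insert direction, a minimal to_sql sketch (the engine URL and table name here are placeholders):

from sqlalchemy import create_engine

engine = create_engine('mysql+pymysql://user:pass@host/db_name')  # placeholder URL
df.to_sql('my_table', engine, if_exists='append', index=False)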

Previous answer: Via mikebmassey from a similar question

import pyodbc
import pandas.io.sql as psql
    
cnxn = pyodbc.connect(connection_info) 
cursor = cnxn.cursor()
sql = "SELECT * FROM TABLE"
    
df = psql.frame_query(sql, cnxn)
cnxn.close()
Paul P
beardc
  • This seems to be the best way to do it, as you don't need to manually use .keys() to get the column index. Probably Daniel's answer was written before this method existed. You can also use pandas.io.sql.read_frame() – RobinL Oct 13 '13 at 13:54
  • @openwonk where would you implement `pd.read_sql()` in the code snippet above? – 3kstc Aug 15 '17 at 06:12
  • Actually, since my last response, I've used `pyodbc` and `pandas` together quite a bit. Adding new answer with example, FYI. – openwonk Aug 16 '17 at 15:50
  • Hello, what is `cursor` used for in your example, please? – tagoma Oct 06 '22 at 15:23
43

If you are using SQLAlchemy's ORM rather than the expression language, you might find yourself wanting to convert an object of type sqlalchemy.orm.query.Query to a Pandas data frame.

The cleanest approach is to get the generated SQL from the query's statement attribute, and then execute it with pandas's read_sql() method. E.g., starting with a Query object called query:

df = pd.read_sql(query.statement, query.session.bind)
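
As a self-contained sketch (the User model and the in-memory SQLite engine are stand-ins for your own setup; the imports are SQLAlchemy 1.4 style):

import pandas as pd
from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class User(Base):
    __tablename__ = 'users'
    id = Column(Integer, primary_key=True)
    name = Column(String)

engine = create_engine('sqlite://')   # in-memory stand-in for your database
Base.metadata.create_all(engine)
session = Session(engine)

# build an ORM query, then hand its SQL to pandas
query = session.query(User).filter(User.id > 0)
df = pd.read_sql(query.statement, query.session.bind)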
Nathan Gould
  • A more efficient approach is to get the statement from sqlalchemy and let pandas do the query itself with `pandas.read_sql_query`, passing `query.statement` to it. See this answer: http://stackoverflow.com/a/29528804/1273938 – LeoRochael Jul 31 '15 at 19:19
  • Thanks @LeoRochael! I edited my answer. Definitely cleaner! – Nathan Gould Nov 12 '17 at 23:56
23

Edit 2014-09-30:

pandas now has a read_sql function. You definitely want to use that instead.

Original answer:

I can't help you with SQLAlchemy -- I always use pyodbc, MySQLdb, or psycopg2 as needed. But when doing so, a function as simple as the one below tends to suit my needs:

import datetime
import decimal

import pyodbc
import numpy as np
import pandas

def __processCursor(cur, dataframe=False, index=None):
    '''
    Processes a database cursor with data on it into either
    a structured numpy array or a pandas dataframe.

    input:
    cur - a pyodbc cursor that has just received data
    dataframe - bool. if false, a numpy record array is returned
                if true, return a pandas dataframe
    index - list of column(s) to use as index in a pandas dataframe
    '''
    # Map each column's DB-API type to a numpy dtype
    # (note: the `unicode` branch only exists on Python 2)
    datatypes = []
    colinfo = cur.description
    for col in colinfo:
        if col[1] == unicode:
            datatypes.append((col[0], 'U%d' % col[3]))
        elif col[1] == str:
            datatypes.append((col[0], 'S%d' % col[3]))
        elif col[1] in [float, decimal.Decimal]:
            datatypes.append((col[0], 'f4'))
        elif col[1] == datetime.datetime:
            datatypes.append((col[0], 'O4'))
        elif col[1] == int:
            datatypes.append((col[0], 'i4'))

    # Pull every row off the cursor as a plain tuple
    data = []
    for row in cur:
        data.append(tuple(row))

    array = np.array(data, dtype=datatypes)
    if dataframe:
        output = pandas.DataFrame.from_records(array)

        if index is not None:
            output = output.set_index(index)

    else:
        output = array

    return output

cnn, cur = myConnectToDBfunction()
cmd = "SELECT * FROM myTable"
cur.execute(cmd)
dataframe = __processCursor(cur, dataframe=True)
Paul H
  • I think you need to import decimal somewhere up top? – joefromct Sep 28 '15 at 19:01
  • @joefromct Perhaps, but this answer is so obsolete I really should just strike the whole thing and show the pandas methods. – Paul H Sep 28 '15 at 19:08
  • It may be relevant for some... the reason I was studying this was because of my other issue, using read_sql() here: http://stackoverflow.com/questions/32847246/pandas-cdecimal-conversionsyntax-invalidoperation-on-read-sql-query – joefromct Sep 29 '15 at 14:57
  • It's relevant for those who can't use SQLAlchemy which doesn't support all databases. – lamecicle Feb 28 '19 at 16:30
  • @lamecicle somewhat disagree. IIRC, `read_sql` can still accept non-SQLAlchemy connections through e.g., pyodbc, psycopg2, etc. – Paul H Feb 28 '19 at 17:21
  • @PaulH After looking into it, I originally had some bad advice and instead looked for good facts and you're right, looks like they cover everything I could think of. Use `read_sql` people! – lamecicle Mar 01 '19 at 10:14
22

1. Using MySQL-connector-python

# pip install mysql-connector-python

import mysql.connector
import pandas as pd

mydb = mysql.connector.connect(
    host = 'host',
    user = 'username',
    passwd = 'pass',
    database = 'db_name'
)
query = 'select * from table_name'
df = pd.read_sql(query, con = mydb)
print(df)

2. Using SQLAlchemy

# pip install pymysql
# pip install sqlalchemy

import pandas as pd
import sqlalchemy

engine = sqlalchemy.create_engine('mysql+pymysql://username:password@localhost:3306/db_name')

query = '''
select * from table_name
'''
df = pd.read_sql_query(query, engine)
print(df)
Lintang Wisesa
19

MySQL Connector

For those who work with the MySQL connector, you can use this code as a start. (Thanks to @Daniel Velkov.)


import pandas as pd
import mysql.connector

# Setup MySQL connection
db = mysql.connector.connect(
    host="<IP>",              # your host, usually localhost
    user="<USER>",            # your username
    password="<PASS>",        # your password
    database="<DATABASE>"     # name of the data base
)   

# You must create a Cursor object. It will let you execute all the queries you need
cur = db.cursor()

# Use all the SQL you like
cur.execute("SELECT * FROM <TABLE>")

# Put it all to a data frame
sql_data = pd.DataFrame(cur.fetchall())
sql_data.columns = cur.column_names

# Close the session
db.close()

# Show the data
print(sql_data.head())
Thomas Devoogdt
9

Here's the code I use. Hope this helps.

import pandas as pd
from sqlalchemy import create_engine

def getData():
  # Parameters
  ServerName = "my_server"
  Database = "my_db"
  UserPwd = "user:pwd"
  Driver = "driver=SQL Server Native Client 11.0"

  # Create the connection
  engine = create_engine('mssql+pyodbc://' + UserPwd + '@' + ServerName + '/' + Database + "?" + Driver)

  sql = "select * from mytable"
  df = pd.read_sql(sql, engine)
  return df

df2 = getData()
print(df2)
Murali Bala
9

This is a short and crisp answer to your problem:

from __future__ import print_function
import MySQLdb
import pandas as pd

# Connecting to MySQL Database
connection = MySQLdb.connect(
             host="hostname",
             port=0000,
             user="userID",
             passwd="password",
             db="table_documents",
             charset='utf8'
           )
print(connection)
#getting data from database into a dataframe
sql_for_df = 'select * from tabledata'
df_from_database = pd.read_sql(sql_for_df , connection)
DeshDeep Singh
5

Like Nathan, I often want to dump the results of a sqlalchemy or sqlsoup Query into a Pandas data frame. My own solution for this is:

from pandas import DataFrame

query = session.query(tbl.Field1, tbl.Field2)
df = DataFrame(query.all(), columns=[column['name'] for column in query.column_descriptions])
Janak Mayer
  • If you have a query object, it's more efficient to get the statement from sqlalchemy and let pandas do the query itself with `pandas.read_sql_query`, passing `query.statement` to it. See this answer: http://stackoverflow.com/a/29528804/1273938 – LeoRochael Jul 31 '15 at 19:21
4

resoverall is a sqlalchemy ResultProxy object. You can read more about it in the sqlalchemy docs on working with Engines and Connections. What matters here is that resoverall is dict-like.

Pandas likes dict-like objects for creating its data structures; see the online docs.
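
A minimal sketch of that idea (in older SQLAlchemy each row supports keys() and indexing, so dict(row) works; in 1.4+ you would use row._mapping instead):

import pandas as pd

# each row behaves like a mapping from column name to value,
# so a list of dicts feeds straight into the DataFrame constructor
df = pd.DataFrame([dict(row) for row in resoverall])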

Good luck with sqlalchemy and pandas.

Wouter Overmeire
4

Simply use pandas and pyodbc together. You'll have to modify your connection string (connstr) according to your database specifications.

import pyodbc
import pandas as pd

# MSSQL Connection String Example
connstr = "Server=myServerAddress;Database=myDB;User Id=myUsername;Password=myPass;"

# Query Database and Create DataFrame Using Results
df = pd.read_sql("select * from myTable", pyodbc.connect(connstr))

I've used pyodbc with several enterprise databases (e.g. SQL Server, MySQL, MariaDB, IBM).

openwonk
  • How to write this dataframe again back to MSSQL using Pyodbc? Otherthan using sqlalchemy – Ramsey Oct 23 '18 at 16:41
  • Use the `to_sql` method on the `DataFrame` object. That method defaults to SQLite, so you have to explicitly pass it an object pointing to the MSSQL database. See [docs](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.to_sql.html). – openwonk Oct 23 '18 at 17:14
  • I tried the below one and I have around 200K rows with 13 columns. It is not completed after 15 minutes also. Any ideas? df.to_sql('tablename',engine,schema='schemaname',if_exists='append',index=False) – Ramsey Oct 24 '18 at 16:21
  • That does seem slow... I would probably need to see whole code in action, sorry. I wish `pandas` was more optimized for light ETL work, but alas... – openwonk Oct 25 '18 at 04:57
3

This question is old, but I wanted to add my two cents. I read the question as "I want to run a query against my [My]SQL database and store the returned data as a Pandas data structure [DataFrame]."

From the code it looks like you mean a MySQL database, and I assume you mean a pandas DataFrame.

import MySQLdb as mdb
import pandas.io.sql as sql

conn = mdb.connect('<server>','<user>','<pass>','<db>');
df = sql.read_frame('<query>', conn)

For example,

conn = mdb.connect('localhost','myname','mypass','testdb');
df = sql.read_frame('select * from testTable', conn)

This will import all rows of testTable into a DataFrame.

BubbleGuppies
1

Long time since the last post, but maybe it helps someone...

A shorter way than Paul H's:

my_data = query.all()
my_df = pandas.DataFrame(my_data)
Wtower
1

The way I do this:

db = db_class()                 # your own database wrapper class
db.execute(query)
mydata = [x for x in db.fetchall()]
df = pd.DataFrame(data=mydata)
Mohit S
Berto
1

Here is mine, just in case you are using "pymysql":

import pymysql
from pandas import DataFrame

host   = 'localhost'
port   = 3306
user   = 'yourUserName'
passwd = 'yourPassword'
db     = 'yourDatabase'

cnx    = pymysql.connect(host=host, port=port, user=user, passwd=passwd, db=db)
cur    = cnx.cursor()

query  = """ SELECT * FROM yourTable LIMIT 10"""
cur.execute(query)

field_names = [i[0] for i in cur.description]
get_data = [xx for xx in cur]

cur.close()
cnx.close()

df = DataFrame(get_data)
df.columns = field_names
kennyut
1

pandas.io.sql.write_frame is DEPRECATED: https://pandas.pydata.org/pandas-docs/version/0.15.2/generated/pandas.io.sql.write_frame.html

You should change to using pandas.DataFrame.to_sql: https://pandas.pydata.org/pandas-docs/stable/reference/api/pandas.DataFrame.to_sql.html
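
A minimal to_sql sketch of that migration (the SQLite file and table name are placeholders):

import pandas as pd
from sqlalchemy import create_engine

engine = create_engine('sqlite:///example.db')   # placeholder connection
df = pd.DataFrame({'a': [1, 2, 3]})              # whatever frame you want to persist
df.to_sql('my_table', engine, if_exists='replace', index=False)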

There is another solution. PYODBC to Pandas - DataFrame not working - Shape of passed values is (x,y), indices imply (w,z)

As of Pandas 0.12 (I believe) you can do:

import pandas
import pyodbc

sql = 'select * from table'
cnn = pyodbc.connect(...)

data = pandas.read_sql(sql, cnn)

Prior to 0.12, you could do:

import pandas
from pandas.io.sql import read_frame
import pyodbc

sql = 'select * from table'
cnn = pyodbc.connect(...)

data = read_frame(sql, cnn)
江明哲
0

If the result type is ResultSet, you should convert it to a dictionary first. Then the DataFrame columns will be collected automatically.

This works in my case:

df = pd.DataFrame([dict(r) for r in resoverall])
tanza9
0

Here is a simple solution I like:

Put your DB connection info in a YAML file in a secure location (do not version it in the code repo).

---
host: 'hostname'
port: port_number_integer
database: 'databasename'
user: 'username'
password: 'password'

Then load the conf into a dictionary, open the DB connection, and load the result set of the SQL query into a data frame:

import yaml
import pymysql
import pandas as pd

db_conf_path = '/path/to/db-conf.yaml'

# Load DB conf
with open(db_conf_path) as db_conf_file:
    db_conf = yaml.safe_load(db_conf_file)

# Connect to the DB
db_connection = pymysql.connect(**db_conf)

# Load the data into a DF
query = '''
SELECT *
FROM my_table
LIMIT 10
'''

df = pd.read_sql(query, con=db_connection)
Raphvanns