
The DB-API spec says that cursor.description should be a 7-item sequence of information for each field in a query result.

It also states:

The first two items (name and type_code) are mandatory, the other five are optional and are set to None if no meaningful values can be provided.

The info returned by the built-in sqlite3 driver only provides the field name and has None for all other attributes.

I have connected to the SQLite database with detect_types=sqlite3.PARSE_DECLTYPES and, of course, created my table with typed fields.

Am I missing something? Is there some trick to it? Why does sqlite3's cursor.description not return the type data as required by PEP249?
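
A minimal reproduction of what I mean (the table and column names are just placeholders):

```python
import sqlite3

conn = sqlite3.connect(":memory:", detect_types=sqlite3.PARSE_DECLTYPES)
cur = conn.cursor()
cur.execute("CREATE TABLE items (id INTEGER, label TEXT, price REAL)")
cur.execute("SELECT * FROM items")

# Each entry is a 7-tuple, but only the name is filled in:
for column in cur.description:
    print(column)
# ('id', None, None, None, None, None, None)
# ('label', None, None, None, None, None, None)
# ('price', None, None, None, None, None, None)
```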

AllAboutMike

1 Answer


Why does sqlite3's cursor.description not return the type data as required by PEP249?

The developers behind this module, as per the issue filed (migrated link), decided that there "is no guarantee that any column in a SQLite result set always has the same type. That's why (it was) decided to err on the side of setting the type code to 'undefined'."

You may wish to try the alternative methods for getting the column data types listed in this thread.
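
For example, here is a rough sketch of two of those approaches, using a throwaway table for illustration: `PRAGMA table_info` reports the declared type of each column of one table, while SQLite's `typeof()` reports the storage class actually used in a particular row (note that it reports `'null'` for NULL values).

```python
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE items (id INTEGER, label TEXT, price REAL)")
cur.execute("INSERT INTO items VALUES (1, 'widget', 9.99)")

# 1) Declared types from the schema (per table, not per query).
for cid, name, decl_type, notnull, default, pk in cur.execute("PRAGMA table_info(items)"):
    print(name, decl_type)      # id INTEGER / label TEXT / price REAL

# 2) Storage classes actually used by one fetched row.
cur.execute("SELECT typeof(id), typeof(label), typeof(price) FROM items")
print(cur.fetchone())           # ('integer', 'text', 'real')
```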

metatoaster
  • Disappointed. I get that sqlite doesn't always return the type that the cursor description might suggest it would, but that is a problem for the application surely. Without this info, my application can't even decide what it's *supposed* to be. – AllAboutMike Jan 10 '23 at 06:29
  • Thanks for the link, though I had been there already. I could put up with getting whole-table type info from the pragma functions, but that's not useful for a query with fields from multiple tables. The "scan the returned row" method is better for that, but it fails for records that have any nulls in fields. – AllAboutMike Jan 10 '23 at 06:35
  • In some other languages with stricter typing, the library would require the underlying table schema (e.g., written as an SQL `CREATE TABLE` statement), parse it, and derive the types before the program compiles. If you have the schema beforehand, that would be an approach; otherwise you will need to extract the schema from the SQLite file and deal with that. Python's standard library does not make it easy to do this. – metatoaster Jan 10 '23 at 09:12
  • If using alternative libraries is an option, you may wish to look into using [SQLAlchemy reflection](https://docs.sqlalchemy.org/en/20/core/reflection.html#reflecting-all-tables-at-once), which offers a straightforward way to get at the schema (plus not be restricted to sqlite). – metatoaster Jan 10 '23 at 09:14
  • Thanks @metatoaster, I do have the schema, I was just hoping to write some code that would not blow up if I change it. I'll just have to assume the types and columns stay fixed and define them in the application code. I'm familiar with SQLAlchemy, but I don't like the ORM style at all. As for the database, I'm happy enough with SQLite for a local cache (Android app) and I'll be using a "real" RDBMS for the central database. – AllAboutMike Jan 14 '23 at 03:57
  • Then just trust (well, validate, too) that the data in the sqlite db will follow the schema. SQLAlchemy reflection does not use the ORM; it follows the table [schema metadata](https://docs.sqlalchemy.org/en/20/core/schema.html) way of doing things and is the layer below the ORM (i.e. the ORM is implemented on top of it, so using the ORM is optional). – metatoaster Jan 14 '23 at 08:23
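
For completeness, a minimal sketch of the reflection approach mentioned in the comments above (the database file name and table name are placeholder assumptions; requires SQLAlchemy to be installed):

```python
from sqlalchemy import create_engine, MetaData

engine = create_engine("sqlite:///cache.db")    # placeholder filename

metadata = MetaData()
metadata.reflect(bind=engine)                   # read table definitions from the database

for column in metadata.tables["items"].columns: # "items" is a placeholder table name
    print(column.name, column.type)
```

This prints each column name together with the type SQLAlchemy derived from the reflected schema, independently of whatever `cursor.description` reports.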