What is the largest allowable number for the Cassandra decimal type that I can add to the db without it blowing up?
2 Answers
decimal in CQL is Java's java.math.BigDecimal (see the CQL documentation). And because it's arbitrary precision, it is limited only by Cassandra's limit on a cell's size, which is 2 GB max (but 1 MB is recommended). Here is a good discussion about the limits of the BigDecimal class, and here is a list of Cassandra's limits.
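For a rough sense of what that cell limit means, here is a back-of-the-envelope sketch (my own estimate, assuming the value is serialized as a 4-byte scale followed by the two's-complement bytes of the unscaled integer):

import math

CELL_BYTES = 1 * 1024 * 1024          # recommended 1 MB cell size
UNSCALED_BYTES = CELL_BYTES - 4       # assume 4 bytes go to the 32-bit scale

# Each byte of the unscaled integer carries log10(256) ~ 2.4 decimal digits.
max_digits = math.floor(UNSCALED_BYTES * math.log10(256))
print(f"~{max_digits:,} significant decimal digits")   # roughly 2.5 million

So within the recommended cell size you can still store a value with millions of significant digits.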

I was expecting that it should be easy to find an answer to this question on the web, but I was surprised that it most definitely isn't :-( So I set out to figure it out for myself:
The Cassandra documentation just says that the decimal type is "variable precision", but doesn't give any clue about whether there is any limit to its precision or scale. DataStax's documentation for Cassandra provides an additional clue: this type is implemented using Java's java.math.BigDecimal. So one gets the impression that to understand its limits, you need to inspect Java's BigDecimal documentation. That documentation says that the scale of a BigDecimal (the negated decimal exponent) is a 32-bit signed integer, while the unscaled value is an unlimited integer (although in practice it is obviously limited by memory).
What does all of this mean?
First, there is a smallest (closest to zero) representable value: the smallest unscaled value (1) combined with the largest possible scale (2147483647), so the smallest value is 10^-2147483647.
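To make this concrete, here is a small sketch using Python's decimal module (the same representation the Python driver example below uses); the decompose helper is my own name, for illustration only, and BigDecimal's scale is simply the negated exponent shown here:

from decimal import Decimal

def decompose(value: str):
    """Split a decimal literal into (unscaled integer, exponent).
    BigDecimal stores the equivalent (unscaledValue, scale), where
    scale == -exponent and must fit in a 32-bit signed integer."""
    sign, digits, exponent = Decimal(value).as_tuple()
    unscaled = int("".join(map(str, digits))) * (-1 if sign else 1)
    return unscaled, exponent

print(decompose("123.45"))            # (12345, -2)       -> scale 2
print(decompose("1e-2147483647"))     # (1, -2147483647)  -> scale 2147483647, the smallest value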
There is no such limit for the largest representable value. The most negative scale (-2147483648) contributes a factor of 10^2147483648, but the unscaled value that this factor multiplies is an integer which can be arbitrarily large! However, there is a problem - beyond that point we can no longer represent powers of ten efficiently. The number 10^3147483647 would need an unscaled value of one billion digits, all but one of them zero.
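A quick back-of-the-envelope calculation illustrates the blow-up; MAX_POWER_FROM_SCALE and unscaled_digits are names I made up for this sketch, and the 2**31 cap follows from the 32-bit scale described above:

MAX_POWER_FROM_SCALE = 2**31          # 2147483648, from scale = -2147483648

def unscaled_digits(exponent: int) -> int:
    """Digits needed in the unscaled value to represent 10**exponent."""
    return max(exponent - MAX_POWER_FROM_SCALE, 0) + 1

print(unscaled_digits(2_147_483_648))   # 1 - the scale absorbs the whole exponent
print(unscaled_digits(3_147_483_647))   # 1_000_000_000 - one billion digits, all but one zero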
After all this is said, comes the question of whether Cassandra will actually accept such high numbers. If you try to assign the puny 10^309 to a decimal column using the CQL command
INSERT INTO tab (p, dec) VALUES (0, 1e309)
the result is the mysterious message:
Failed parsing statement: [INSERT INTO tab (p, dec) VALUES (0, 1e309)] reason: NumberFormatException null
It turns out that such a statement fails for any value over the maximum of a double (a little over 1e308), despite the statement not assigning a double at all!
However, all is not lost. It is possible to insert bigger numbers using a prepared statement. For example, using the Python CQL driver, one can do something like:
from decimal import Decimal
# "cql" is a connected cassandra.cluster.Session; table1 and p are defined elsewhere
stmt = cql.prepare(f"INSERT INTO {table1} (p, dec) VALUES ({p}, ?)")
cql.execute(stmt, [Decimal('1e10000')])
and this works as expected.
As expected from the explanation above, 1e2147483647 works well. So does 1e2147483648.
The Cassandra Python driver has a bug if you try even bigger exponents - e.g., if you try 1e2147483649, it results in a mysterious error message from the Python driver:
E TypeError: Received an argument of invalid type for column "dec". Expected: <class 'cassandra.cqltypes.DecimalType'>, Got: <class 'decimal.Decimal'>; ('i' format requires -2147483648 <= number <= 2147483647)
But this is still not a limitation - you can write 10000e2147483648 and the result works, and means 1e2147483652.
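The 'i' format complaint comes from the driver packing the negated exponent as a 32-bit scale when it serializes a Decimal; the following stand-alone sketch is my reconstruction of that check, not the driver's actual code:

import struct
from decimal import Decimal

def scale_fits(value: str) -> bool:
    """True if the negated exponent (BigDecimal's scale) fits in a signed 32-bit int."""
    exponent = Decimal(value).as_tuple().exponent
    try:
        struct.pack('>i', -exponent)     # the same "i" format the error message mentions
        return True
    except struct.error:
        return False

print(scale_fits('1e2147483648'))        # True  - scale is -2147483648, the minimum int
print(scale_fits('1e2147483649'))        # False - scale -2147483649 overflows 32 bits
print(scale_fits('10000e2147483648'))    # True  - exponent stays at 2147483648

Any exponent that can be expressed as a 32-bit scale plus extra digits in the unscaled value still serializes fine, which is exactly the 10000e2147483648 trick.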
This demonstrates my claim above, that 1e2147483647 is not a hard limit. Still, you'd be wise to limit yourself to roughly that number if: 1. you want it to be easily represented by the Python driver, and 2. you don't want numbers with just one significant digit but a huge scale to take a huge amount of memory.
