What is the difference between the Oracle DATE and TIMESTAMP types? Do both have a date and a time component? Also, what are the corresponding types in Java for these date types?
- [`TIMESTAMP`](http://docs.oracle.com/cd/B28359_01/server.111/b28286/sql_elements001.htm#SQLRF00203) is the same as `DATE`, except it has added fractional seconds precision. – NullUserException Oct 02 '13 at 15:49
- The biggest difference: `DATE` is accurate to the second and doesn't have fractional seconds. `TIMESTAMP` has fractional seconds. The number of decimal places in the seconds depends on the server OS; for example, the Oracle on my Windows 7 machine returns three decimal places for the timestamp whereas a client's huge Solaris box returns six. Timestamps can also hold a specific time zone or be normalized to a common time zone - go [here](http://docs.oracle.com/cd/B28359_01/server.111/b28286/sql_elements001.htm) and then search for "TIMESTAMP" for more information, then experiment a little :) – Ed Gibbs Oct 02 '13 at 15:50 (a quick sketch for experimenting follows below)
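For anyone who wants to experiment as suggested in the comment above, a minimal JDBC sketch along these lines prints `SYSDATE` and `SYSTIMESTAMP` side by side so you can compare the fractional seconds on your own system. The connection URL and credentials are placeholders, and the Oracle JDBC driver (ojdbc) must be on the classpath:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.sql.Timestamp;

public class SysdateVsSystimestamp {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details -- adjust to your environment.
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/XEPDB1", "scott", "tiger");
             Statement st = con.createStatement();
             ResultSet rs = st.executeQuery("SELECT SYSDATE, SYSTIMESTAMP FROM dual")) {
            rs.next();
            Timestamp fromDate = rs.getTimestamp(1); // DATE: fractional seconds always zero
            Timestamp fromTs   = rs.getTimestamp(2); // SYSTIMESTAMP: fractional seconds preserved
            System.out.println("SYSDATE      = " + fromDate + " (nanos=" + fromDate.getNanos() + ")");
            System.out.println("SYSTIMESTAMP = " + fromTs + " (nanos=" + fromTs.getNanos() + ")");
        }
    }
}
```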
2 Answers
DATE and TIMESTAMP share the same 7-byte base encoding, which stores the century, year, month, day, hour, minute and second. But TIMESTAMP can additionally store fractional seconds (11 bytes) and fractional seconds with a time zone (13 bytes).
TIMESTAMP was added to Oracle for ANSI compliance. Before that, Oracle had only DATE.
In general you should use DATE. But if sub-second precision is a requirement, use TIMESTAMP.
As for Java, the oracle.sql.DATE class from the Oracle JDBC driver provides conversions between the Oracle DATE/TIMESTAMP data types and the Java classes java.sql.Date, java.sql.Time and java.sql.Timestamp.
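For illustration, here is a minimal JDBC sketch of reading both column types into Java. The table and column names (EVENTS, CREATED as DATE, CREATED_TS as TIMESTAMP) and the connection details are made-up placeholders; `getObject` with java.time classes needs a JDBC 4.2-capable driver:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.sql.Timestamp;
import java.time.LocalDateTime;

public class DateVsTimestampDemo {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details -- adjust to your environment.
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/XEPDB1", "scott", "tiger");
             Statement st = con.createStatement();
             // Hypothetical table: CREATED DATE, CREATED_TS TIMESTAMP(6)
             ResultSet rs = st.executeQuery("SELECT created, created_ts FROM events")) {

            while (rs.next()) {
                // Both columns can be read as java.sql.Timestamp; the DATE column
                // keeps its time-of-day but its fractional seconds are always zero.
                Timestamp fromDate = rs.getTimestamp("created");
                Timestamp fromTs   = rs.getTimestamp("created_ts");

                // With a JDBC 4.2 driver the java.time classes can be used directly.
                LocalDateTime ldt = rs.getObject("created_ts", LocalDateTime.class);

                System.out.printf("DATE=%s  TIMESTAMP=%s (nanos=%d)  java.time=%s%n",
                        fromDate, fromTs, fromTs.getNanos(), ldt);
            }
        }
    }
}
```

Writing works the same way in both directions: PreparedStatement.setTimestamp can target either column type, but any fractional second part simply cannot be stored in the DATE column.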

Guillermo Luque
- A word of warning: unfortunately it appears that, by default, when you query a DATE column in Oracle, it returns you only the "day", but if you cast it like TO_TIMESTAMP(DATE_COLUMN_NAME) then it returns you more precision. Which somehow isn't the default with jdbc/hibernate, at least it isn't here. – rogerdpack Nov 12 '13 at 20:14
- Hi @siledh, I think you should use TIMESTAMP only when the additional info (time zone) is important for your application. I mean, it depends on your business logic. For instance, you have to choose TIMEZONE when you need to share your data with applications deployed on different platforms, and the time info must be unambiguously understood between them. – Guillermo Luque Dec 04 '13 at 12:56
- You should use `TIMESTAMP WITH TIME ZONE`. Otherwise daylight saving time will introduce ambiguous times. – kmkaplan Feb 07 '15 at 08:12 (see the sketch after these comments)
- I find it really confusing that the DATE type contains TIME information. That's not what the word means. – Daddy32 Apr 22 '16 at 09:45
- @Daddy32 `TIMESTAMP` was added about 20 years after `DATE`. They couldn't really go back and change `DATE`. – William Robertson Oct 06 '19 at 11:13
- @GuillermoLuque timestamps are 11 bytes, or 13 for `timestamp with time zone`, which is the other big difference from `date`. – William Robertson Oct 06 '19 at 11:17
- @WilliamRobertson 20 years ago, date still meant date and time still meant time. Regardless of when it was added, a "date" never meant date AND time. It is a question of inconsistency, not a question of whether they should go back and change it after using an obvious smell from the beginning and building in tech debt – pixel Jun 08 '23 at 15:18
- @pixel Not 20 years ago, 20 years *after they added `timestamp`*, which I'd have to check but I'd guess was about 20 years ago. Yes, `date` was the wrong name for a datetime type, but this is something that happened in like 1978. They also called the dummy table `dual` for no adequately explained reason. That's just how they rolled back then. – William Robertson Jun 09 '23 at 22:15
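Picking up kmkaplan's point about time zones: when a column is declared `TIMESTAMP WITH TIME ZONE`, a JDBC 4.2 driver (ojdbc8 or later) can return it as a java.time.OffsetDateTime, which carries the stored offset and so stays unambiguous across a DST fall-back. A minimal sketch, again with made-up table and connection details:

```java
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.time.OffsetDateTime;

public class TimeZoneAwareRead {
    public static void main(String[] args) throws Exception {
        // Placeholder connection details -- adjust to your environment.
        try (Connection con = DriverManager.getConnection(
                "jdbc:oracle:thin:@//localhost:1521/XEPDB1", "scott", "tiger");
             Statement st = con.createStatement();
             // Hypothetical table: STARTS_AT TIMESTAMP WITH TIME ZONE
             ResultSet rs = st.executeQuery("SELECT starts_at FROM meetings")) {

            while (rs.next()) {
                // The offset travels with the value, so a wall-clock time that
                // repeats on the night the clocks go back is still distinguishable
                // (for example +02:00 vs +01:00).
                OffsetDateTime startsAt = rs.getObject("starts_at", OffsetDateTime.class);
                System.out.println(startsAt);
            }
        }
    }
}
```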
[This answer was given as a picture illustrating the problem/solution; the image is not reproduced here.]

Marcin Badtke
- @mast because this is what I want to do - illustrate the problem/solution. This is what humans are used to - pictures. It is natural and easier to remember. – Marcin Badtke Nov 21 '21 at 09:49