
We ran into an issue involving one of our Oracle table definitions (DDL) and one of our PL/SQL scripts.

A column in the table was widened from VARCHAR2(20) to VARCHAR2(30), but the change was not reflected in one of the PL/SQL scripts consuming the data, which still declared the corresponding variable as VARCHAR2(20). This caused an ORA-06502: PL/SQL: numeric or value error during one of our regression tests.
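For context, here is a minimal sketch of the failure (the dept table, dept_name column, and lookup key are hypothetical, not our real schema): the column has been widened to VARCHAR2(30), but the consuming block still declares VARCHAR2(20), so any fetched value longer than 20 characters raises the error.

-- Hypothetical reproduction of the ORA-06502 scenario
DECLARE
  v_dept_name  VARCHAR2(20);   -- stale size: the column is now VARCHAR2(30)
BEGIN
  SELECT dept_name
    INTO v_dept_name
    FROM dept
   WHERE dept_id = 10;         -- raises ORA-06502 if the value exceeds 20 characters
END;
/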

I would like to seek advice from the Oracle and database experts here: have you encountered scenarios in the past where changes to table DDL were not reflected in the dependent PL/SQL, and how do you deal with this gap?

I know one easy way would be some form of enforcement or paperwork, but is there a more elegant solution, something analogous to the way foreign keys prevent insert/update/delete anomalies?

With thanks!

Oh Chin Boon

1 Answer


For starters, you should always declare your variables with %TYPE, anchored to the column definitions in your tables:

I.e., instead of:

dept_name  VARCHAR2(50);

Use:

dept_name  dept.dept_name%TYPE;

That way, when your base table changes, your declaration is still valid.

You can declare your procedure parameters with %TYPE as well:

PROCEDURE proc(p1 IN dept.dept_name%TYPE)
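Putting both together, here is a minimal sketch of a fully anchored procedure (the dept_id column and the lookup itself are assumed for illustration): because the parameter and the local variable are both anchored, widening dept.dept_name requires no change to this code, only a recompile when the dependency is invalidated.

-- Minimal sketch; dept_id and the lookup are assumed for illustration
CREATE OR REPLACE PROCEDURE print_dept_name (
  p_dept_id IN dept.dept_id%TYPE
) AS
  v_dept_name  dept.dept_name%TYPE;   -- tracks the current column definition
BEGIN
  SELECT dept_name
    INTO v_dept_name
    FROM dept
   WHERE dept_id = p_dept_id;

  DBMS_OUTPUT.PUT_LINE(v_dept_name);
END print_dept_name;
/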
DCookie
    +1 Generally this works well. You still have to be wary if there is an expectation of a size limit. One example would be an address which is to be printed on an envelope where an enlarged database column wouldn't fit. – Gary Myers Jun 30 '11 at 04:48
  • @Chin Boon, it certainly was something the PL/SQL design team got right! – DCookie Jun 30 '11 at 05:04
  • @Gary, agreed. There is no perfect solution. Even languages that offer string types that you just keep stuffing more into, e.g., Perl, suffer from this limitation. That pesky real world keeps getting in the way ;-) – DCookie Jun 30 '11 at 05:07