It depends - often a multi-layered approach is required.
For example, you may have a set of simple business rules that can be maintained directly within the database as check constraints. I only use these for simple, self-contained checks.
Example - checking a Y/N flag:
alter table example_table add constraint ck_example_table_flag
  check (flag is null or flag in ('Y', 'N'));
In this case, if you get any other value in this column you know the data is bad and this isn't a rule that's going to change over time. Adding this constraint will prevent any application from corrupting your data.
For more complicated business rules that are not self-contained - for example, if a valid date depends on who is entering the value, or on the contents of a field in another table - I would handle this in the 'application'.
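To make the "not self-contained" case concrete: in Oracle, a check constraint cannot query another table, so a cross-table rule needs a trigger (or application code). Here is a minimal sketch - the `orders`/`customers` tables, column names, and the rule itself are hypothetical, purely for illustration:

```sql
-- Hypothetical cross-table rule: an order's date must not precede
-- the customer's registration date. A check constraint cannot look
-- at the customers table, so a row-level trigger is one option.
create or replace trigger trg_orders_date_check
before insert or update of order_date on orders
for each row
declare
  v_registered_on customers.registered_on%type;
begin
  select registered_on
    into v_registered_on
    from customers
   where customer_id = :new.customer_id;

  if :new.order_date < v_registered_on then
    raise_application_error(-20001,
      'Order date precedes customer registration date');
  end if;
end;
/
```

Note that trigger-based checks of this kind can miss concurrent sessions' uncommitted changes, which is one reason many teams prefer to centralise such rules in a single update path instead.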
Sometimes the 'application' is the database - in this case your check constraints may be in table triggers or packaged pl/sql that handles all table updates. For example:
example_app.update_data(id => 1000, value => 'foo');
If this is how your data updates are handled, there's nothing wrong with handling your business rules as check constraints, triggers with checks, or directly in the update_data procedure.
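A minimal sketch of what such a packaged update procedure could look like - the package name example_app and the call signature come from the example above, but the table, column names, and the specific validation rule are assumptions:

```sql
create or replace package example_app as
  procedure update_data(id in number, value in varchar2);
end example_app;
/

create or replace package body example_app as

  procedure update_data(id in number, value in varchar2) is
  begin
    -- Business rule enforced in one place: reject blank values.
    if value is null or length(trim(value)) = 0 then
      raise_application_error(-20002, 'value must not be blank');
    end if;

    update example_table
       set value = update_data.value   -- qualified name refers to the parameter
     where example_table.id = update_data.id;

    if sql%rowcount = 0 then
      raise_application_error(-20003, 'No row found for id ' || id);
    end if;
  end update_data;

end example_app;
/
```

If all applications go through this package (e.g. because they only have execute rights on it, not direct DML rights on the table), the procedure becomes the single enforcement point for the rule.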
In many enterprise environments, the application layer and the database layer are handled by different teams. If you're on the team supporting the application layer, raising a change request for the database team to put a check constraint in place may be painful enough that adding a bit of logic to the application layer feels more straightforward. Conversely, the database team may find the database-side change the easier one.
@Bohemian makes an excellent point about using check constraints to enforce data integrity when there are multiple clients.