I agree this seems like an omission, but thinking it through it's hard to see what benefits it might provide.
The main use case for the CASCADE CONSTRAINTS option is when re-building a schema - usually in development - when the overhead of putting the DROP TABLE statements in the right order is too much effort. Because we are re-building the schema, all the foreign key constraints should be restored by the build script anyway, so we don't really need to know what they are.
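To make that use case concrete, a fragment of such a rebuild script might look like this (PARENT and CHILD are made-up table names for illustration):

    -- CASCADE CONSTRAINTS lets us drop PARENT even though CHILD still has a
    -- foreign key pointing at it, so the DROP statements need no ordering.
    drop table parent cascade constraints;
    drop table child;

    -- The rest of the build script recreates the tables and the foreign key,
    -- so we never need to know which constraints were cascaded.
    create table parent (id number primary key);
    create table child (
      id        number primary key,
      parent_id number references parent (id)
    );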
This does assume that we have build scripts properly maintained and kept under source control. If we are not in that happy situation, dropping a table and cascading the foreign key constraints is reckless.
Similarly, if the intention is to drop the table and keep it dropped, then we should have undertaken an impact analysis which would have identified the foreign keys before dropping the table - probably by running the sort of query you have in your question :) (something along the lines of the sketch below). But your reference to a script suggests this is not the scenario you have in mind.
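To be clear about what I mean, an impact-analysis query against USER_CONSTRAINTS would be something along these lines (I'm guessing at the shape of the query in your question; T23 is just a placeholder for the table to be dropped):

    -- Find foreign keys which reference the primary or unique keys of T23
    select constraint_name, table_name
    from   user_constraints
    where  constraint_type = 'R'
    and    r_constraint_name in (
             select constraint_name
             from   user_constraints
             where  table_name = 'T23'
             and    constraint_type in ('P', 'U')
           );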
We can imagine Oracle implementing this feature with something like the RETURNING ... INTO syntax supported by DML statements. However, there is a problem, which is highlighted by a gap in your query. Other schemas can build foreign keys referencing our table (if we have granted them the REFERENCES privilege), so the query should be over ALL_CONSTRAINTS and return OWNER as well as CONSTRAINT_NAME (there's a sketch of such a query at the end of this answer).

This means the posited RETURNING ... INTO feature would need to populate two nested tables - or one table of a complex type - which probably requires lots of low-level jiggery-pokery (to use the technical term) without delivering much benefit, because the use case is so narrow.
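For completeness, here is the widened check against ALL_CONSTRAINTS which also catches foreign keys owned by other schemas (T23 is again just a placeholder):

    -- Foreign keys in any schema which reference the primary or unique keys
    -- of our table; OWNER tells us whose constraint it is.
    select owner, constraint_name, table_name
    from   all_constraints
    where  constraint_type = 'R'
    and    (r_owner, r_constraint_name) in (
             select owner, constraint_name
             from   all_constraints
             where  owner = user
             and    table_name = 'T23'
             and    constraint_type in ('P', 'U')
           );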