As far as SQL Server is concerned, a normal table is limited to 1,024 columns, so I would consider any column count approaching that limit to be large. That said, you can use wide tables to extend the limit to 30,000 columns, but there are tradeoffs:
A wide table is a table that has a column set defined. Wide tables use sparse columns to increase the total number of columns that a table can have to 30,000. The limits on indexes and statistics are also raised, to 1,000 and 30,000 respectively. The maximum size of a wide table row is 8,019 bytes, so most of the data in any particular row should be NULL. To create a wide table, or to change an existing table into one, you add a column set to the table definition. The maximum number of nonsparse columns plus computed columns in a wide table remains 1,024.
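To make that concrete, here is a minimal sketch of a wide table definition. The table and column names (dbo.Products and so on) are hypothetical; the SPARSE and COLUMN_SET FOR ALL_SPARSE_COLUMNS syntax is what the documentation describes:

```sql
-- Minimal wide-table sketch (T-SQL). Names are illustrative only.
CREATE TABLE dbo.Products
(
    ProductID   int PRIMARY KEY,            -- nonsparse key column
    Color       varchar(20)  SPARSE NULL,   -- sparse: NULLs take no row space
    Weight      decimal(9,2) SPARSE NULL,
    -- The column set exposes all sparse columns as a single untyped XML
    -- column; defining it is what makes this a wide table.
    SpecialPurposeColumns xml COLUMN_SET FOR ALL_SPARSE_COLUMNS
);
```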
By using wide tables, you can create flexible schemas within an application. You can add or drop columns whenever you want. Keep in mind that using wide tables has unique performance considerations, such as increased run-time and compile-time memory requirements.
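As a hedged illustration of that flexibility, the sketch below evolves the hypothetical dbo.Products table from above and writes sparse values through the column set. Reading and writing sparse columns as XML through the column set is documented behavior, but the specific columns here are my own:

```sql
-- Schema changes without touching existing rows (names are illustrative).
ALTER TABLE dbo.Products ADD Voltage decimal(5,2) SPARSE NULL;
ALTER TABLE dbo.Products DROP COLUMN Color;

-- Sparse values can also be set through the column set as XML,
-- one <ColumnName>value</ColumnName> element per sparse column.
INSERT INTO dbo.Products (ProductID, SpecialPurposeColumns)
VALUES (1, '<Weight>2.50</Weight><Voltage>12.00</Voltage>');
```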
According to this thread (and confirmed by PostgreSQL's own documentation on limits), the limit for PostgreSQL is 1,600 columns per table.
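If you want to see that cap for yourself, here is a small sketch for a scratch PostgreSQL database (the table name t_too_wide is hypothetical) that builds a 1,601-column CREATE TABLE and lets it fail:

```sql
-- PostgreSQL sketch: generate a CREATE TABLE with one column too many.
DO $$
DECLARE
    ddl text := 'CREATE TABLE t_too_wide (';
BEGIN
    FOR i IN 1..1601 LOOP
        ddl := ddl || format('c%s int,', i);  -- c1 int, c2 int, ...
    END LOOP;
    ddl := rtrim(ddl, ',') || ')';
    -- Should fail with an error along the lines of:
    -- ERROR: tables can have at most 1600 columns
    EXECUTE ddl;
END $$;
```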
Based on these numbers, I would consider any table whose column count approaches 1,000 to be huge.