O(n) means that the cost grows linearly with the number of input elements n (say, the number of entries in a key table), as opposed to quadratically (e.g. O(n^2)) or logarithmically (e.g. O(log n)). O(n) is bad if you need your algorithm to work efficiently for large values of n, and especially if there is an alternative you could use that scales sublinearly, such as O(log n).
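To make those growth rates concrete, here's a minimal Python sketch (just arithmetic on the formulas, not a benchmark) printing how many steps each class implies as n grows:

```python
import math

# Step counts implied by each complexity class as n grows.
for n in (10, 1_000, 1_000_000):
    print(f"n={n:>9,}  "
          f"O(log n)={math.ceil(math.log2(n)):>3}  "
          f"O(n)={n:>9,}  "
          f"O(n^2)={n * n:>15,}")
```

Going from n = 1,000 to n = 1,000,000 adds about 10 steps to the logarithmic column, but multiplies the linear column by a thousand.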
Time complexity and space complexity are different problems.
Space complexity is only a big problem if, for realistic values of n, you end up using a problematic amount of memory or storage. O(n) storage is expected in many cases, since doing better than O(n) generally requires compressing your data, or exploiting duplicates in it. For one basic example, if you have key/value data where the values are large but frequently duplicated, storing a full copy of the value under every key is wasteful; storing each distinct value once and having the keys share a reference to it can be a lot more space-efficient.
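Here's a hedged Python sketch of that idea (the sizes and the `intern` helper are made up for illustration):

```python
# Naive layout: a fresh ~100 KB copy of the value under every key.
copies = {key: "x" * 100_000 for key in range(100)}  # ~10 MB total

# Deduplicated layout: store each distinct value once, share references.
interned = {}
def intern(value):
    # setdefault stores the value the first time and returns the stored
    # object on every later call, so duplicates collapse to one object.
    return interned.setdefault(value, value)

shared = {key: intern("x" * 100_000) for key in range(100)}  # ~100 KB total

# In CPython the difference shows up as the number of distinct objects:
print(len({id(v) for v in copies.values()}))  # 100
print(len({id(v) for v in shared.values()}))  # 1
```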
Worst-case O(n) time is often called bad in the context of index lookups, because O(n) means you might have to look at every element in the index to find the one you're looking for, i.e. the algorithm is not much better than just scanning the whole list until you find a match. That is inefficient compared to various long-known tree indexing structures, which take O(log n) time: the time to look something up does not grow in linear proportion to the number of elements in the index, because the tree structure cuts the number of needed comparisons down to a logarithmic curve.
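A minimal sketch of the gap, using Python's bisect on a sorted list as a stand-in for a real tree index:

```python
import bisect

index = list(range(1_000_000))  # a sorted index of a million keys

# O(n) worst case: scan element by element until a match turns up.
def linear_lookup(keys, target):
    for i, key in enumerate(keys):
        if key == target:
            return i
    return -1

# O(log n): each comparison halves the remaining candidates, which is
# the same idea a balanced tree index exploits.
def indexed_lookup(keys, target):
    i = bisect.bisect_left(keys, target)
    return i if i < len(keys) and keys[i] == target else -1

# Looking up the last key costs ~1,000,000 comparisons one way, ~20 the other.
assert linear_lookup(index, 999_999) == indexed_lookup(index, 999_999)
```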
Some other types of problem have no known solution better than O(n), and some are inherently worse, such as simulating a field of AI agents that can all potentially interact with each other, since every pair of agents may need to be considered.
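As a hedged sketch (the `interact` callback is hypothetical), a simulation step where every agent can affect every other has to visit every pair, which is already worse than linear:

```python
# Every agent may interact with every other: n*(n-1)/2 pairs per step,
# so a single step is O(n^2) even before the interactions do any work.
def step(agents, interact):
    for i, a in enumerate(agents):
        for b in agents[i + 1:]:
            interact(a, b)

step(list(range(5)), lambda a, b: None)  # placeholder agents and interaction
```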