There is no problem in running a CPU at 100%.
Even in the unlikely case that your specific hardware had a cooling problem leading to overheating, since this is an AWS server that'd be Amazon's issue, not yours (rest assured, they took that into account in their pricing model).
If it weren't doing that job, the CPU would be sitting idle, so if you need to have $job done, you're better off letting it run at full speed. You don't want to artificially restrict it.
The main disadvantage is that running the CPU continuously at 100% uses more power. But you wanted that task done, right?¹
(¹ Do note that in some cases like bitcoin mining, the cost of electricity is higher than the value of the mined bitcoins)
Second, if the system CPU is fully used at 100% doing some not-too-important task (like crunching SETI packets), it could happen that something more important arrives (such as an interactive request by the owner), but the computer doesn't respond to it promptly because it is busy processing those packets. This is solved by nicing the less-important task. The system then knows how to prioritise them and you avoid this problem.
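To make that concrete, here is a minimal Python sketch of nicing (the "crunch packets" work is a stand-in for whatever your batch job is; 19 is the conventional lowest-priority niceness on Linux):

```python
import os

# Raise this process's niceness to the maximum (= lowest priority), so the
# scheduler favours interactive work over our batch crunching.
# os.nice() adds its argument to the current niceness and returns the result.
new_niceness = os.nice(19)
print(f"now running at niceness {new_niceness}")

# ... from here on, do the not-too-important work (e.g. crunch packets) ...
```

From the shell, the equivalent is prefixing the command with `nice -n 19`, or using `renice` on an already-running process.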
In some places you may read that it is bad to have a server working at 100%. A CPU at 100% does show a bottleneck: you could produce more with more CPUs or faster ones. But as long as you are happy enough with the throughput, it's ok. You can think of it as a shop where all the clerks are always busy. That is probably bad, since new customers can't shop there because nobody can serve them.
However, if we have a warehouse of items to sort, with no special deadline and enough work for the next 5 years, you would want to have everyone working on it full time, not keep someone idle.
If the warehouse is near the shop, you can combine both: the clerks serve customers, and when there are no customers left, they go on sorting the warehouse until the next client arrives.
Traditionally, you have certain dedicated hardware and it's up to you to use it more or less. In a model like AWS, you have more options, though. (Note: I am assuming your task is made up of many small, easily parallelizable chunks.)
- Use a single instance of size X for as long as needed
- Use a faster instance of size X+n
- Use a slower but cheaper instance, taking more time
- Use multiple instances
In some cases you could use several smaller instances for the cost of a big one and get more done (while for other task sets it wouldn't pay off).
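To see the trade-off in numbers, here is a small Python sketch with made-up prices and throughputs (illustrative only, not real AWS pricing); it just works out how long each option takes and what it costs:

```python
# Total work to do, in arbitrary "chunks" (embarrassingly parallel).
job_units = 1000

# Hypothetical options: count of instances, $/hour each, chunks/hour each.
options = {
    "1x large": {"count": 1, "price_per_hour": 0.40, "units_per_hour": 100},
    "4x small": {"count": 4, "price_per_hour": 0.10, "units_per_hour": 30},
}

for name, o in options.items():
    hours = job_units / (o["count"] * o["units_per_hour"])
    cost = hours * o["count"] * o["price_per_hour"]
    print(f"{name}: {hours:.1f} h, ${cost:.2f}")
```

With these invented numbers the four small instances finish sooner and cost less; with a different price/throughput ratio the single big one would win, which is exactly why it depends on your task set.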
Plus, the costs aren't fixed. You can probably benefit by launching extra instances off-hours, when they are cheaper, and shrinking the fleet when it would be more expensive. Suppose you were able to borrow the clerks of nearby stores (at a certain variable rate). The open-24-hours shop could happily let you have the employee doing the night shift sort some of your warehouse items quite cheaply, since only a handful of customers will pass by. However, if you wanted an extra pair of hands on Black Friday, that would be much more expensive (in fact, better not to leave anyone sorting the warehouse that day).
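As a sketch of that idea in Python (the price series is invented, not real spot-market data): run the flexible warehouse-sorting work only in the blocks of the day where the hourly price stays under what the work is worth to you.

```python
# Hypothetical prices in $/hour for blocks of the day (made-up numbers).
prices = {"night": 0.03, "morning": 0.06, "afternoon": 0.12, "evening": 0.25}

budget_per_hour = 0.07  # what the flexible batch work is worth to us

# Keep only the blocks cheap enough to be worth running in.
run_blocks = [block for block, price in prices.items() if price <= budget_per_hour]
print(run_blocks)  # → ['night', 'morning']
```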
AWS lets you handle load quite dynamically, and when you don't need the responses within X time, you can optimize your costs noticeably. However, they have "too many options", and these are complex to understand. You also need to understand your workload pretty well in order to make the right decisions.