Your package could attempt to buffer all 500,000 rows in memory if it contains asynchronous (blocking) components such as Sort or Aggregate, because data can't flow past that point until every row has left the source and reached the blocking component. Only then can SSIS determine the maximum value of column X, or that all the rows have been sorted by key Y. A conceptual sketch of the difference is below.
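As a rough analogy (plain Python, not SSIS code), the sketch below contrasts a streaming, row-by-row transform with a blocking sort that must swallow the whole input before emitting anything. The component names and row shape are made up for illustration.

    # Synchronous vs. blocking transforms, illustrated with Python generators.

    def source(n=500_000):
        """Pretend source component yielding rows one at a time."""
        for i in range(n):
            yield {"id": i, "key": n - i}

    def derived_column(rows):
        """Synchronous transform: each output row depends only on its own
        input row, so rows stream through without being accumulated."""
        for row in rows:
            row["key_squared"] = row["key"] ** 2
            yield row

    def sort_by_key(rows):
        """Blocking transform: sorted() must pull the entire input into
        memory before it can hand back the first output row."""
        return iter(sorted(rows, key=lambda r: r["key"]))

    # Streaming path: memory use stays roughly constant.
    for row in derived_column(source()):
        pass  # rows are processed and released one at a time

    # Blocking path: all 500,000 rows are materialized before the loop starts.
    for row in sort_by_key(source()):
        pass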
If your machine runs out of memory, you will see an event recorded for buffers spilled to disk (approximate name). That means your high-performance, in-memory ETL engine has begun writing data to disk, and performance suffers mightily at that point: all of that data gets written out so the flow can get through the blocking component, and then the written data has to be read back from disk once whatever calculation had to happen has happened. And if you did something like sort your data in the data flow and then aggregate it, you just paid that price twice.
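To make the "pay twice" point concrete, here is a minimal sketch (again plain Python, not the SSIS engine) of what a spill costs: every row is written to a temporary file and then read back before downstream work can continue. The function name and file handling are illustrative assumptions, not anything SSIS exposes.

    import os
    import tempfile

    def spill_then_read(rows):
        """Write rows to a temp file (the 'spill'), then read them back."""
        fd, path = tempfile.mkstemp(suffix=".spill")
        try:
            with os.fdopen(fd, "w") as spill_file:   # first pass: disk writes
                for row in rows:
                    spill_file.write(f"{row}\n")
            with open(path) as spill_file:           # second pass: disk reads
                for line in spill_file:
                    yield line.rstrip("\n")
        finally:
            os.remove(path)

    # Every row pays for a write *and* a read before the flow moves on.
    for row in spill_then_read(str(i) for i in range(1_000)):
        pass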
That said, if you're using only synchronous components, there are mechanisms built into the data flow engine to detect back pressure. If your destination can't keep up with the source, the engine signals the source component to send fewer rows until the destination catches up. Pretty clever stuff, but nothing you as a developer can really influence beyond not adding asynchronous components unless the solution requires them.
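Back pressure itself is easy to picture with a bounded buffer: a fast producer is forced to wait whenever the slow consumer falls behind. The sketch below is a generic Python analogy under that assumption, not something you configure in SSIS.

    import queue
    import threading
    import time

    buffer = queue.Queue(maxsize=10)   # small bounded buffer between "components"

    def fast_source():
        for i in range(100):
            buffer.put(i)              # blocks when the buffer is full
        buffer.put(None)               # sentinel: no more rows

    def slow_destination():
        while True:
            row = buffer.get()
            if row is None:
                break
            time.sleep(0.01)           # pretend each insert is slow

    producer = threading.Thread(target=fast_source)
    consumer = threading.Thread(target=slow_destination)
    producer.start()
    consumer.start()
    producer.join()
    consumer.join()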