I am new to performance testing.
Then you should find a mentor you can work with for a period of time. The tool you select represents only about 5-15% of the solution set for conducting a performance test.
Write down your requirements for reporting, analysis, the skills of your team, the applications and hosts you need to monitor, and the environment your tool needs to run in. Use these requirements as a lens through which to view your options.
As to accuracy, I would invite you to take the following into consideration as part of your testing efforts: evaluate the tools technically. Look at the load from one live user on your box in terms of IP connections, HTTP sessions, etc., then use the same instruments to look at how your tool generates its load. You will find some differences. Some quite popular tools will use a smaller number of sessions and piggyback requests from multiple users across a single open connection. This reduces the number of connections versus a natural population and makes your job of finding bottlenecks on the session side of the house very difficult. I will not identify the tool or tools so inclined; that will be for your research.

You will want to perform this evaluation for one, five, and ten users to see how your load is shaped and where the structural differences between the tools lie.
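To make the connection-counting idea concrete, here is a minimal, self-contained Python sketch (not tied to any particular commercial tool) that counts distinct TCP connections on a throwaway local HTTP server. It contrasts a "natural" client that opens a fresh connection per request with one that piggybacks all requests over a single keep-alive connection, which is the behavior difference described above. The server, handler, and helper names are my own illustration, not part of any tool's API.

```python
import http.client
import threading
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

seen_connections = set()

class Handler(BaseHTTPRequestHandler):
    protocol_version = "HTTP/1.1"  # enable keep-alive so connection reuse is possible

    def do_GET(self):
        # Each TCP connection arrives from a distinct (ip, ephemeral port) pair,
        # so the size of this set is the number of connections the load used.
        seen_connections.add(self.client_address)
        body = b"ok"
        self.send_response(200)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *_):
        pass  # suppress per-request logging

# Throwaway server on an OS-assigned port.
server = ThreadingHTTPServer(("127.0.0.1", 0), Handler)
threading.Thread(target=server.serve_forever, daemon=True).start()
port = server.server_address[1]

def one_connection_per_request(n):
    """Natural population: every request opens its own TCP connection."""
    for _ in range(n):
        conn = http.client.HTTPConnection("127.0.0.1", port)
        conn.request("GET", "/")
        conn.getresponse().read()
        conn.close()

def shared_connection(n):
    """Piggybacked load: all requests reuse one keep-alive connection."""
    conn = http.client.HTTPConnection("127.0.0.1", port)
    for _ in range(n):
        conn.request("GET", "/")
        conn.getresponse().read()
    conn.close()

one_connection_per_request(5)
natural = len(seen_connections)

seen_connections.clear()
shared_connection(5)
piggybacked = len(seen_connections)

print(natural, piggybacked)  # prints: 5 1
server.shutdown()
```

Run against a real tool, you would instead watch the connection counts with something like netstat or your OS's socket statistics while the tool drives one, five, and ten virtual users, and compare the shape against the same observation of live users.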
The big commercial vendors all offer smaller, limited-virtual-user versions of their tools at no charge. You can use these versions to evaluate each tool against your requirements and against your technical evaluation of how closely it matches true user behavior.