In ROS, there are several different frames, as BTables was trying to get at. If you're trying to get a position estimate from sensor data using the robot_localization package, first familiarize yourself with the different kinds of frames and the ROS message types / data that go into and come out of that process.
Normally in ROS there are three main frames: map -> odom -> base_link. The base_link frame is rigidly attached to the robot and represents it. The odom frame integrates velocity/acceleration sensor updates to produce a continuous (but drifting) position; its origin is usually wherever the robot boots up. The map frame is the "real world" location of the robot. It does require an origin position and yaw, because otherwise it's arbitrary.
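As a toy illustration of how those frames chain together (pure Python standing in for what tf2 does in ROS; the numbers here are made up):

```python
import math

def compose(t1, t2):
    """Compose two 2D transforms (x, y, yaw): result = t1 then t2."""
    x1, y1, a1 = t1
    x2, y2, a2 = t2
    return (x1 + math.cos(a1) * x2 - math.sin(a1) * y2,
            y1 + math.sin(a1) * x2 + math.cos(a1) * y2,
            a1 + a2)

# map->odom: where the odom origin sits in the map (corrects drift)
map_to_odom = (10.0, 5.0, math.pi / 2)
# odom->base_link: integrated, drifting pose of the robot since boot-up
odom_to_base = (2.0, 0.0, 0.0)

# Chaining the two gives the robot's pose in the map frame:
map_to_base = compose(map_to_odom, odom_to_base)  # ~(10.0, 7.0, pi/2)
```

This is why map->odom can jump discontinuously when an absolute position update comes in, while odom->base_link stays smooth.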
In your case, it seems like you want to anchor the robot in a longitude/latitude coordinate frame. My recommendation is still to pick a fixed origin in your environment if you can; otherwise you can use the boot-up location as your origin.
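To make "anchor in a longitude/latitude frame" concrete, here's a minimal sketch of turning lat/long into local map-frame meters relative to a chosen origin. This uses a flat-earth (equirectangular) approximation, which is only good over small areas; robot_localization's navsat_transform_node uses UTM instead:

```python
import math

EARTH_RADIUS_M = 6378137.0  # WGS-84 equatorial radius

def latlon_to_local(lat, lon, origin_lat, origin_lon):
    """Approximate local (east, north) offset in meters of (lat, lon)
    from a chosen origin, via an equirectangular projection.
    Illustrative only; fine for small areas near the origin."""
    d_lat = math.radians(lat - origin_lat)
    d_lon = math.radians(lon - origin_lon)
    east = EARTH_RADIUS_M * d_lon * math.cos(math.radians(origin_lat))
    north = EARTH_RADIUS_M * d_lat
    return east, north
```

For example, a point 0.001 degrees of latitude north of the origin comes out roughly 111 m north of it.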
So to do that, I'm assuming your odom->base_link transform comes from an EKF or UKF node (from robot_localization) that fuses the IMU data from your quadcopter. Your map->odom transform is then another EKF or UKF instance that additionally takes in an absolute position in the map frame as an update. (See the example launch file (navsat notes), the yaml config file (navsat notes), and more from the GitHub repo.) You can then feed your fix topic (sensor_msgs/NavSatFix) from the hector gazebo GPS plugin into the navsat_transform_node to get a position update for the estimator above, and it handles the transforms to the global or UTM coordinate system.
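As a rough sketch of what that config might look like (parameter names from robot_localization; topic names and values here are illustrative and need adapting to your setup):

```yaml
# odom->base_link estimator: fuses continuous sensors only
ekf_odom:
  frequency: 30
  world_frame: odom          # publishes odom->base_link
  map_frame: map
  odom_frame: odom
  base_link_frame: base_link
  imu0: /raw_imu             # your quadcopter's IMU topic (example name)

# map->odom estimator: same sensors plus an absolute position update
ekf_map:
  frequency: 30
  world_frame: map           # publishes map->odom
  map_frame: map
  odom_frame: odom
  base_link_frame: base_link
  imu0: /raw_imu
  odom0: /odometry/gps       # absolute position from navsat_transform_node

# navsat_transform_node: converts NavSatFix into a map-frame odometry update
navsat_transform:
  frequency: 30
  magnetic_declination_radians: 0.0   # set for your location
  zero_altitude: false
```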
If you don't care about reporting your position back out with respect to the world, this gets a bit simpler; otherwise the package also has the features to report your position back out in lat/long.
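For that last part, navsat_transform_node can publish the filtered map-frame estimate back out as a sensor_msgs/NavSatFix (on a gps/filtered topic) if you enable the corresponding parameter:

```yaml
navsat_transform:
  publish_filtered_gps: true   # report the filtered estimate back out as lat/long
```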