In LlamaIndex, if similarity_top_k is set to a very large value, such as the total number of chunks (nodes) in the index, is that equivalent to feeding the entire document to GPT? Wouldn't that exceed the maximum token limit?
Could someone explain the principle behind LlamaIndex's similarity_top_k?
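For context, here is a minimal sketch of how I understand the parameter is typically passed (assuming the current LlamaIndex query-engine API; the directory path and query string are placeholders):

```python
from llama_index.core import VectorStoreIndex, SimpleDirectoryReader

# Load documents and build a vector index; LlamaIndex splits the
# documents into chunks (nodes) and embeds each one.
documents = SimpleDirectoryReader("data").load_data()
index = VectorStoreIndex.from_documents(documents)

# similarity_top_k controls how many of the most similar chunks are
# retrieved and placed into the prompt sent to the LLM for a query.
query_engine = index.as_query_engine(similarity_top_k=3)
response = query_engine.query("What does the document say about X?")
print(response)
```

My question is what happens if similarity_top_k is raised to the number of chunks in the whole index.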