If you want to consume new batches of roughly 100 messages from your consumer group, set max_bytes to a value that, for your data model, will return roughly 100 records per request. You can be conservative (fetch fewer records, then fetch again until you reach the 100-record cutoff), or always fetch more than you need and ignore the excess. Either way, you should manage your consumer group's offsets manually.
GET /consumers/testgroup/instances/my_consumer/records?max_bytes=300000
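A minimal sketch of that consume loop, assuming the consumer was created with "format": "json" and is already subscribed to the topic; the proxy host, group, and instance names are just placeholders (Python with the requests library):

import requests

BASE = "http://proxy-instance.kafkaproxy.example.com/consumers/testgroup/instances/my_consumer"
FETCH_HEADERS = {"Accept": "application/vnd.kafka.json.v2+json"}

batch = []
last_offsets = {}  # (topic, partition) -> offset of the last record we kept

while len(batch) < 100:
    resp = requests.get(BASE + "/records", params={"max_bytes": 300000}, headers=FETCH_HEADERS)
    resp.raise_for_status()
    records = resp.json()
    if not records:
        break  # nothing available right now
    for rec in records:
        if len(batch) == 100:
            break  # cutoff at 100; anything past this point is ignored
        batch.append(rec)
        last_offsets[(rec["topic"], rec["partition"])] = rec["offset"]

Any records fetched past the 100th are ignored by this loop, which is exactly why the offset handling below matters.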
If you fetch more than 100 messages and ignore the rest, you will not receive them again on that consumer group if offset auto-commit is enabled (that's configured when you create your consumer). You probably don't want this to happen!
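For completeness, here is a sketch of creating such a consumer with auto-commit disabled (this would happen before the consume loop above); the host, names, format, and auto.offset.reset value are assumptions to adjust for your setup:

# Create a consumer instance in group "testgroup" with auto-commit disabled,
# so offsets only move when we commit them explicitly.
resp = requests.post(
    "http://proxy-instance.kafkaproxy.example.com/consumers/testgroup",
    headers={"Content-Type": "application/vnd.kafka.v2+json"},
    json={
        "name": "my_consumer",
        "format": "json",
        "auto.offset.reset": "earliest",
        "auto.commit.enable": "false",
    },
)
resp.raise_for_status()
print(resp.json()["base_uri"])  # base URI to use for all later calls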
If you're manually committing offsets, then you can ignore whatever you want, as long as you then commit the correct offsets to guarantee you don't lose any messages. You can manually commit your offsets like this:
POST /consumers/testgroup/instances/my_consumer/offsets HTTP/1.1
Host: proxy-instance.kafkaproxy.example.com
Content-Type: application/vnd.kafka.v2+json

{
  "offsets": [
    {
      "topic": "test",
      "partition": 0,
      "offset": <calculated offset where you stopped consuming for this partition>
    },
    {
      "topic": "test",
      "partition": 1,
      "offset": <calculated offset where you stopped consuming for this partition>
    }
  ]
}
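Continuing the Python sketch above (it reuses BASE and the last_offsets dictionary from the consume loop), the same commit would look like this; I'm assuming the proxy expects the offset where you stopped consuming, as described above, so check the REST Proxy documentation if you need to be sure whether it wants that offset or the next one:

# Build the commit payload from the per-partition offsets tracked while consuming.
offsets = [
    {"topic": topic, "partition": partition, "offset": offset}
    for (topic, partition), offset in last_offsets.items()
]
resp = requests.post(
    BASE + "/offsets",
    headers={"Content-Type": "application/vnd.kafka.v2+json"},
    json={"offsets": offsets},
)
resp.raise_for_status()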
If you're trying to get exactly the first 100 records of the topic, then you need to reset the consumer group's offsets for that topic and each of its partitions before consuming again. You can do it this way (taken from the Confluent documentation):
POST /consumers/testgroup/instances/my_consumer/offsets HTTP/1.1
Host: proxy-instance.kafkaproxy.example.com
Content-Type: application/vnd.kafka.v2+json

{
  "offsets": [
    {
      "topic": "test",
      "partition": 0,
      "offset": 0
    },
    {
      "topic": "test",
      "partition": 1,
      "offset": 0
    }
  ]
}
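In the Python sketch above, this reset is just the same commit call with offset 0 for every partition, after which re-running the consume loop from the top of the answer returns the first ~100 records:

# Reset both partitions of "test" back to offset 0 (the partition list here is
# an example; enumerate whatever partitions your topic actually has).
resp = requests.post(
    BASE + "/offsets",
    headers={"Content-Type": "application/vnd.kafka.v2+json"},
    json={"offsets": [
        {"topic": "test", "partition": 0, "offset": 0},
        {"topic": "test", "partition": 1, "offset": 0},
    ]},
)
resp.raise_for_status()
# Then re-run the consume loop shown earlier to collect the first 100 records.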