Questions tagged [exactly-once]
39 questions
0
votes
1 answer
Does RDD recomputation on task failure cause duplicate data processing?
When a particular task fails and causes an RDD to be recomputed from its lineage (maybe by reading the input file again), how does Spark ensure that there is no duplicate processing of data? What if the task that failed had written half of the data to some…

sanjay
- 3
- 1
0
votes
1 answer
Is it possible to achieve Exactly-Once Semantics using a BASE-style database?
In stream processing applications (e.g. based on Apache Flink or Apache Spark Streaming) it is sometimes necessary to process data exactly once.
In the database world, something equivalent can be achieved by using databases that follow the ACID criteria…

MW.
- 544
- 5
- 19
0
votes
1 answer
Flink exactly once semantics and data loss
We have a Flink setup with a Kafka producer that currently uses at-least-once semantics. We are considering switching the Kafka producer to exactly-once semantics, as this would bring us benefits further down the pipeline. Considering the documentation…

Yordan Pavlov
- 1,303
- 2
- 13
- 26
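For context on what such a switch involves, here is a minimal sketch of how the exactly-once semantic is typically enabled on the (pre-KafkaSink) FlinkKafkaProducer; the topic name, bootstrap servers and timeout values are assumptions, not taken from the question.

import java.nio.charset.StandardCharsets;
import java.util.Properties;
import org.apache.flink.streaming.api.environment.StreamExecutionEnvironment;
import org.apache.flink.streaming.connectors.kafka.FlinkKafkaProducer;
import org.apache.flink.streaming.connectors.kafka.KafkaSerializationSchema;
import org.apache.kafka.clients.producer.ProducerRecord;

public class ExactlyOnceProducerJob {
    public static void main(String[] args) throws Exception {
        StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
        env.enableCheckpointing(10_000);  // EXACTLY_ONCE commits Kafka transactions on checkpoints

        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092");
        // must not exceed the broker's transaction.max.timeout.ms
        props.setProperty("transaction.timeout.ms", "60000");

        FlinkKafkaProducer<String> producer = new FlinkKafkaProducer<>(
                "output-topic",
                (KafkaSerializationSchema<String>) (element, timestamp) ->
                        new ProducerRecord<>("output-topic", element.getBytes(StandardCharsets.UTF_8)),
                props,
                FlinkKafkaProducer.Semantic.EXACTLY_ONCE);

        env.fromElements("a", "b", "c").addSink(producer);
        env.execute("exactly-once-producer-demo");
    }
}

Until a checkpoint completes, records sit in an open Kafka transaction, so downstream consumers generally need isolation.level=read_committed to avoid reading uncommitted data.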
0
votes
1 answer
Reverting the Transactional Outbox Pattern
Problem Description:
It is not viable to use a distributed transaction that spans the database and the message broker to atomically update the database and publish messages/events.
The outbox pattern describes an approach for letting services execute…

aballaci
- 1,043
- 8
- 19
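For readers unfamiliar with the pattern the question refers to, a minimal sketch of the outbox write path, assuming a relational database reached over JDBC (the table names, columns and connection URL are placeholders): the business row and the outbox event are written in one local transaction, and a separate relay later reads the outbox table and publishes to the broker.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.util.UUID;

public class OutboxWriter {

    public void placeOrder(String orderId, String payloadJson) throws Exception {
        try (Connection conn = DriverManager.getConnection("jdbc:postgresql://localhost/app")) {  // placeholder URL
            conn.setAutoCommit(false);
            try (PreparedStatement insertOrder = conn.prepareStatement(
                         "INSERT INTO orders (id, status) VALUES (?, 'PLACED')");                  // hypothetical table
                 PreparedStatement insertOutbox = conn.prepareStatement(
                         "INSERT INTO outbox (id, aggregate_id, type, payload) VALUES (?, ?, 'OrderPlaced', ?)")) {
                insertOrder.setString(1, orderId);
                insertOrder.executeUpdate();

                insertOutbox.setString(1, UUID.randomUUID().toString());
                insertOutbox.setString(2, orderId);
                insertOutbox.setString(3, payloadJson);
                insertOutbox.executeUpdate();

                conn.commit();    // state change and event become visible atomically
            } catch (Exception e) {
                conn.rollback();  // neither the row nor the event is persisted
                throw e;
            }
        }
    }
}

Delivery from the outbox to the broker is at-least-once, so consumers are usually expected to deduplicate, for example by event id.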
0
votes
1 answer
Spring Cloud Stream project with "Failed to obtain partition information" error
When I use this configuration:
spring:
  cloud:
    stream:
      kafka:
        binder:
          min-partition-count: 1
          replication-factor: 1
  kafka:
    producer:
      transaction-id-prefix: tx-
      retries: 1
      acks: all
My…

KCOtzen
- 866
- 1
- 9
- 11
0
votes
0 answers
Is there a way to get the committed offset in an EOS Kafka stream?
Background:
Setting a consumer interceptor in StreamsConfig ensures that the interceptor(s) are called when messages are consumed/committed. Snippet from org.apache.kafka.clients.consumer.internals.ConsumerCoordinator#commitOffsetsSync
if…

Vinodhini Chockalingam
- 306
- 2
- 17
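For reference, this is roughly what such an interceptor looks like (the class name is made up); registered on the Streams main consumer, its onCommit callback sees offsets committed through the consumer's commit path. Whether that path is used at all under exactly-once, where offsets are committed through the producer's transaction instead, is exactly what the question is asking.

import java.util.Map;
import org.apache.kafka.clients.consumer.ConsumerInterceptor;
import org.apache.kafka.clients.consumer.ConsumerRecords;
import org.apache.kafka.clients.consumer.OffsetAndMetadata;
import org.apache.kafka.common.TopicPartition;

// registered for Kafka Streams via:
// props.put(StreamsConfig.consumerPrefix(ConsumerConfig.INTERCEPTOR_CLASSES_CONFIG),
//           CommittedOffsetLogger.class.getName());
public class CommittedOffsetLogger implements ConsumerInterceptor<byte[], byte[]> {

    @Override
    public ConsumerRecords<byte[], byte[]> onConsume(ConsumerRecords<byte[], byte[]> records) {
        return records;  // pass records through unchanged
    }

    @Override
    public void onCommit(Map<TopicPartition, OffsetAndMetadata> offsets) {
        // called from the consumer's commit path with the offsets it committed
        offsets.forEach((tp, om) ->
                System.out.printf("committed %s -> offset %d%n", tp, om.offset()));
    }

    @Override
    public void close() { }

    @Override
    public void configure(Map<String, ?> configs) { }
}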
0
votes
1 answer
Exactly-once semantics with Spring Kafka
I'm trying to test my exactly-once configuration to make sure all the configs I set are correct and the behavior is as I expect.
I seem to encounter a problem with duplicate sends.
public static void main(String[] args) {
MessageProducer…

sharon gur
- 343
- 5
- 22
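A minimal sketch of the producer side of such a setup with spring-kafka, assuming plain String records (bootstrap servers, topic and transaction-id prefix are placeholders): giving the producer factory a transaction-id prefix enables transactions, and KafkaTemplate.executeInTransaction commits or aborts all sends in the callback together.

import java.util.HashMap;
import java.util.Map;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;

public class TransactionalSendExample {

    public static void main(String[] args) {
        Map<String, Object> producerProps = new HashMap<>();
        producerProps.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");  // placeholder
        producerProps.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        producerProps.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);

        DefaultKafkaProducerFactory<String, String> factory =
                new DefaultKafkaProducerFactory<>(producerProps);
        factory.setTransactionIdPrefix("tx-");  // enables idempotence and transactions

        KafkaTemplate<String, String> template = new KafkaTemplate<>(factory);

        template.executeInTransaction(ops -> {
            ops.send("my-topic", "key", "value-1");  // topic name is a placeholder
            ops.send("my-topic", "key", "value-2");
            return true;                             // both records commit atomically
        });
    }
}

On the consuming side, only readers configured with isolation.level=read_committed will ignore records from aborted transactions.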
0
votes
1 answer
Not able to produce a message to a Kafka topic using Transactional.Sink in Alpakka, but I see the idempotent producer is enabled
Hi, I was trying to use the Producer API as shown in the Alpakka documentation.
I'm able to consume records using the Transactional source, and the Producer is created, but I'm not able to put messages on the topic.
Not able to Produce to topic using Transactional.Sink in…

Appy
- 63
- 1
- 4
0
votes
1 answer
Kafka Failed to rebalance when PROCESSING_GUARANTEE_CONFIG set to EXACTLY_ONCE
I have a Kafka Streams application that works fine. However, when I add the property:
properties.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);
Then I get the following error:
Exception in thread…

Lasse Frandsen
- 141
- 1
- 7
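A minimal sketch of a Streams app with that property set (application id, servers and topic names are placeholders). One caveat worth noting: EXACTLY_ONCE depends on broker-side transaction support, and on small development clusters the defaults for the internal transaction topic (transaction.state.log.replication.factor=3, transaction.state.log.min.isr=2) commonly exceed the number of brokers available, which can surface as rebalance or initialization failures; whether that applies here is not stated in the question.

import java.util.Properties;
import org.apache.kafka.common.serialization.Serdes;
import org.apache.kafka.streams.KafkaStreams;
import org.apache.kafka.streams.StreamsBuilder;
import org.apache.kafka.streams.StreamsConfig;

public class ExactlyOnceStreamsApp {

    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(StreamsConfig.APPLICATION_ID_CONFIG, "eos-demo");
        props.put(StreamsConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092");
        props.put(StreamsConfig.DEFAULT_KEY_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        props.put(StreamsConfig.DEFAULT_VALUE_SERDE_CLASS_CONFIG, Serdes.String().getClass());
        // the property from the question; requires broker-side transaction support
        props.put(StreamsConfig.PROCESSING_GUARANTEE_CONFIG, StreamsConfig.EXACTLY_ONCE);

        StreamsBuilder builder = new StreamsBuilder();
        builder.stream("input-topic").to("output-topic");  // trivial pass-through topology

        new KafkaStreams(builder.build(), props).start();
    }
}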