
CCDAK Exam Dumps - Confluent Certified Developer for Apache Kafka Certification Examination

Searching for workable clues to ace the Confluent CCDAK Exam? You're in the right place! ExamCert offers realistic, trusted, and authentic exam prep tools to help you earn your desired credential. ExamCert's CCDAK PDF Study Guide, Testing Engine, and Exam Dumps follow a reliable exam preparation strategy, providing the most relevant and up-to-date study material in an easy-to-learn question-and-answer format. ExamCert's study tools simplify the exam's complex and confusing concepts and introduce you to the real exam scenario, which you can practice with its testing engine and real exam dumps.

Question # 25

Which two statements are correct about transactions in Kafka?

(Select two.)

A. All messages from a failed transaction will be deleted from a Kafka topic.

B. Transactions are only possible when writing messages to a topic with a single partition.

C. Consumers can consume both committed and uncommitted transactions.

D. Information about producers and their transactions is stored in the __transaction_state topic.

E. Transactions guarantee at-least-once delivery of messages.
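For context, here is a minimal, hypothetical sketch of the transactional flow in the standard Java producer client (the broker address, topic name, and transactional.id are placeholders). It illustrates two behaviors the options hinge on: a producer with a transactional.id is tracked by the transaction coordinator in the internal __transaction_state topic, and aborted records are not deleted from the log; whether consumers see them depends on their isolation.level setting.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.serialization.StringSerializer;

public class TransactionalProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        // The transactional.id registers this producer with the transaction
        // coordinator, which stores its state in the internal __transaction_state topic.
        props.put(ProducerConfig.TRANSACTIONAL_ID_CONFIG, "demo-tx-producer"); // placeholder id

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            producer.initTransactions();
            try {
                producer.beginTransaction();
                // A transaction may span multiple partitions, and even multiple topics.
                producer.send(new ProducerRecord<>("orders", "key", "value"));
                producer.commitTransaction();
            } catch (KafkaException e) {
                // Aborted records stay in the log; consumers with
                // isolation.level=read_committed skip them, while read_uncommitted
                // consumers (the default) still see them. A real application would
                // close the producer on fatal errors such as ProducerFencedException.
                producer.abortTransaction();
            }
        }
    }
}
```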

Question # 26

You are writing a producer application and need to ensure proper delivery.

You configure the producer with acks=all.

Which two actions should you take to ensure proper error handling?

(Select two.)

A. Check the value of ProducerRecord.status().

B. Use a callback argument in producer.send() where you check delivery status.

C. Check that producer.send() returned a RecordMetadata object that is not null.

D. Surround the call to producer.send() with a try/catch block to catch KafkaException.
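For reference, a minimal sketch (placeholder broker and topic) of the two error-handling mechanisms the Java producer API actually provides: a callback passed to producer.send() that reports the asynchronous delivery result, and a try/catch around the send() call for synchronous failures. Note that send() returns a Future<RecordMetadata> rather than a RecordMetadata, and ProducerRecord has no status() method.

```java
import java.util.Properties;
import org.apache.kafka.clients.producer.KafkaProducer;
import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.clients.producer.ProducerRecord;
import org.apache.kafka.common.KafkaException;
import org.apache.kafka.common.serialization.StringSerializer;

public class SafeProducerSketch {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder broker
        props.put(ProducerConfig.ACKS_CONFIG, "all");
        props.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());
        props.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class.getName());

        try (KafkaProducer<String, String> producer = new KafkaProducer<>(props)) {
            // try/catch handles synchronous failures thrown directly by send(),
            // e.g. serialization errors or buffer exhaustion.
            try {
                // The callback reports the asynchronous delivery result:
                // exception is null on success, non-null on failure.
                producer.send(new ProducerRecord<>("orders", "key", "value"),
                        (metadata, exception) -> {
                            if (exception != null) {
                                System.err.println("Delivery failed: " + exception.getMessage());
                            } else {
                                System.out.printf("Delivered to %s-%d@%d%n",
                                        metadata.topic(), metadata.partition(), metadata.offset());
                            }
                        });
            } catch (KafkaException e) {
                System.err.println("send() threw synchronously: " + e.getMessage());
            }
        }
    }
}
```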

Question # 27

You use Kafka Connect with the JDBC source connector to extract data from a large database and push it into Kafka.

The database contains dozens of tables, and the current connector cannot process the data fast enough.

You add more Kafka Connect workers, but throughput doesn't improve.

What should you do next?

A. Increase the number of Kafka partitions for the topics.

B. Increase the value of the connector's tasks.max property.

C. Add more Kafka brokers to the cluster.

D. Modify the database schemas to enable horizontal sharding.
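For context: Kafka Connect workers only host tasks, and the JDBC source connector parallelizes by assigning tables to tasks, so adding workers helps only if there are enough tasks to spread across them. A hypothetical standalone-style configuration raising the task cap might look like the sketch below (the connector class is Confluent's JDBC source; connection details, column name, and topic prefix are placeholders):

```
# jdbc-source.properties -- hypothetical standalone config; all values are placeholders
name=jdbc-source
connector.class=io.confluent.connect.jdbc.JdbcSourceConnector
connection.url=jdbc:postgresql://db.example.com:5432/appdb
connection.user=connect
connection.password=secret
mode=incrementing
incrementing.column.name=id
topic.prefix=db-
# Tables are divided among up to this many tasks; raising the cap lets the
# connector read tables in parallel and gives added workers tasks to run.
tasks.max=10
```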
