Posts

Kafka Consumer Configuration

At most once: Offsets are committed as soon as messages are received. If processing fails, those messages are lost: say the consumer goes down, then when it comes back up it resumes reading from the last committed offset, skipping anything that was received but never processed.

At least once: Offsets are committed only after the message batch is processed. If the consumer fails mid-batch, the same messages are read again and processed twice, so make sure the system is idempotent.

Exactly once: This can be achieved with the Kafka transactional APIs (it is straightforward with the Kafka Streams API).

Consumer offset reset behavior:
auto.offset.reset=latest -> start reading from the end of the topic
auto.offset.reset=earliest -> start reading from the beginning
auto.offset.reset=none -> throw an exception if no previous offset is found

Note: consumer offsets can be lost. In Kafka < 2.0 they expire if the consumer hasn't read data in 1 day; in Kafka >= 2.0, in 7 days. This can be controlled by offset.retention.mi...
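The at-least-once pattern above can be sketched as plain consumer properties. This is a minimal sketch, not a complete consumer: the bootstrap server and group id are placeholder values, and in a real application these keys would be passed to a KafkaConsumer.

```java
import java.util.Properties;

public class ConsumerConfigSketch {
    // Property keys for an at-least-once consumer: auto-commit is
    // disabled so offsets are committed manually after the batch
    // is processed, and auto.offset.reset picks the starting point
    // when no committed offset exists.
    public static Properties atLeastOnceProps() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        props.setProperty("group.id", "demo-group");              // placeholder
        props.setProperty("enable.auto.commit", "false");
        props.setProperty("auto.offset.reset", "earliest");
        return props;
    }
}
```

With enable.auto.commit=false the application decides when an offset is safe to commit, which is exactly what moves the failure mode from "lost messages" to "possible duplicates".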

Software Engineering Interview Preparation GitRepos

Here is a list of Git repositories and resources for interview preparation.

- LinkedIn Interview Tips - Explore valuable interview tips and strategies from LinkedIn to boost your interview preparation.
- Mastering Behavioral Interviews - Learn how to excel in behavioral interviews and showcase your skills effectively.
- Technical Interview Practice - Access resources for practicing technical interview questions to sharpen your coding and problem-solving abilities.
- Resume Building Guide - Craft an impressive resume with this guide, crucial for making a strong first impression in interviews.
- Mock Interview Sessions - Participate in mock interview sessions to gain confidence and refine your interview skills.
- Networking for Job Seekers - Discover the power of networking and how it can help you secure interviews and job opportunities.
- Interview Dress Code Tips - Get advice on what to wear for interviews, ensuring you make a professional impression.
- Cracking the Coding Interview - A popular bo...

Kafka Producer Configuration

This article covers the Kafka producer acknowledgment (acks) configuration.

acks=0: The producer assumes the message was written successfully the moment it is sent, without waiting for the broker to accept it. If the broker goes offline at that moment, the data is lost. Suitable for scenarios where it is potentially okay to lose data, such as metrics collection. Producer throughput is highest here because there is the least network overhead.

acks=1: The producer assumes the message was written successfully once it gets an acknowledgment from the partition leader. This is the default behavior for Kafka 1.x through 2.8. The leader's response is required, but replication is not guaranteed because it happens in the background; if the leader goes offline before replication completes, data loss is possible. If no ack is received, the producer retries sending the message.

acks=all (-1): The default for Kafka 3.0+. The producer waits for acceptance from all in-sync replicas. No data loss...
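The three acks levels above map to a single producer property. Below is a minimal sketch of the durable (acks=all) configuration; the bootstrap server is a placeholder, and the retries value is just an illustrative choice, not a recommended number.

```java
import java.util.Properties;

public class ProducerConfigSketch {
    // Property keys for a durability-first producer: acks=all waits
    // for every in-sync replica to acknowledge the write, and retries
    // covers the case where an ack is not received in time.
    public static Properties durableProps() {
        Properties props = new Properties();
        props.setProperty("bootstrap.servers", "localhost:9092"); // placeholder
        props.setProperty("acks", "all");
        props.setProperty("retries", "3"); // illustrative value
        return props;
    }
}
```

Switching the "acks" value to "1" or "0" trades away the replication guarantee for lower latency, as described above.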

Realtime project version migration

This article is about version migration for any supported language and its related frameworks. OpenRewrite provides a plugin that can be added as a dependency to the project; running the build produces most of the required changes as a patch that can be applied to migrate the project. Changes that cannot be applied automatically are flagged with hints instead. For more details, refer to the website below and look up the recipes for your project's specific language and framework.

Java 17 migration: https://docs.openrewrite.org/running-recipes/popular-recipe-guides/migrate-to-java-17
Spring Boot 3 migration: https://docs.openrewrite.org/recipes/java/spring/boot3/upgradespringboot_3_1
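As one possible invocation for a Maven project, the Java 17 recipe can be run from the command line without adding a permanent plugin entry. The coordinates and recipe name below are taken from the OpenRewrite recipe guide linked above; verify them against that guide before running, as they may change between releases.

```shell
# Run the Java 17 migration recipe against the current Maven project.
# The plugin downloads the recipe artifact and rewrites sources in place;
# review the resulting diff before committing.
mvn -U org.openrewrite.maven:rewrite-maven-plugin:run \
  -Drewrite.recipeArtifactCoordinates=org.openrewrite.recipe:rewrite-migrate-java:RELEASE \
  -Drewrite.activeRecipes=org.openrewrite.java.migrate.UpgradeToJava17
```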

SQL Complex Queries [Over and Partition By]

Over and Partition by clause: useful when you want to select aggregated results alongside non-aggregated columns; GROUP BY alone is not useful in this case.

Example: a User table has username, name, and country fields, and we want each user along with the user count for their country.

select username as email,
       name,
       count(country) over (partition by country) as totalUsersByCountry,
       country
from User
order by totalUsersByCountry desc;

-- Another solution is joining against a grouped subquery
select u.username as email,
       u.name,
       cr.totalUsersByCountry
from User u
join (select country, count(country) as totalUsersByCountry
      from User
      group by country) cr
  on u.country = cr.country
order by cr.totalUsersByCountry desc;

Mastering Hibernate and JPA

Setup H2 DB:
- Add the required dependency for H2.
- Enable the web console with spring.h2.console.enabled=true.
- Launch the app and check the logs for the console URL and the JDBC URL, then provide the JDBC URL on the console login page.

App logs:
...+05:30  INFO 6117 --- [           main] o.a.c.c.C.[Tomcat].[localhost].[/]       : Initializing Spring embedded WebApplicationContext
2023-06-30T17:46:18.383+05:30  INFO 6117 --- [           main] w.s.c.ServletWebServerApplicationContext : Root WebApplicationContext: initialization completed in 1295 ms
2023-06-30T17:46:18.429+05:30  INFO 6117 --- [           main] com.zaxxer.hikari.HikariDataSource       : HikariPool-1 - Starting...
2023-06-30T17:46:18.591+05:30  INFO 6117 --- [           main] com.zaxxer.hikari.pool.HikariPool        : HikariPool-1 - Added connection conn0: url=jdbc:h2:mem:d...
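The setup steps above boil down to a few entries in application.properties. This is a minimal sketch assuming an in-memory database; the database name "testdb" and the console path are placeholder choices, and with Spring Boot's H2 auto-configuration only the first line is strictly required.

```properties
# Enable the H2 web console (served by Spring Boot at the path below)
spring.h2.console.enabled=true
spring.h2.console.path=/h2-console

# Fix the in-memory JDBC URL so the console login is predictable
# ("testdb" is a placeholder name)
spring.datasource.url=jdbc:h2:mem:testdb
spring.datasource.driverClassName=org.h2.Driver
```

Pinning spring.datasource.url avoids the randomly generated database name seen in the logs above, so the same JDBC URL can be pasted into the console login every run.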