
SQL Complex Queries [OVER and PARTITION BY]

 OVER and PARTITION BY clause 

The OVER clause with PARTITION BY is useful when you want to select aggregated results alongside non-aggregated columns. GROUP BY is not useful in this case, because it collapses each group into a single row.


Example  

Suppose a User table has username, name, and country columns, and we want each user's details together with the user count for that user's country.


select username as email,
       name,
       count(country) over (partition by country) as totalUsersByCountry,
       country
from User
order by totalUsersByCountry desc;

-- Another solution uses a join with a grouped subquery



select u.username as email,
       u.name,
       cr.totalUsersByCountry
from User u
join (select country, count(country) as totalUsersByCountry
      from User
      group by country) cr
  on u.country = cr.country
order by cr.totalUsersByCountry desc;
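The window-function query above can be tried end to end with a small runnable sketch. This uses Python's built-in sqlite3 module (SQLite supports window functions since 3.25) and hypothetical sample data; the table contents and email values are made up for illustration:

```python
import sqlite3

# Hypothetical in-memory sample data to illustrate the query above.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE User (username TEXT, name TEXT, country TEXT)")
conn.executemany(
    "INSERT INTO User VALUES (?, ?, ?)",
    [
        ("a@x.com", "Asha", "IN"),
        ("b@x.com", "Bela", "IN"),
        ("c@x.com", "Carl", "US"),
    ],
)

rows = conn.execute(
    """
    SELECT username AS email,
           name,
           COUNT(country) OVER (PARTITION BY country) AS totalUsersByCountry,
           country
    FROM User
    ORDER BY totalUsersByCountry DESC
    """
).fetchall()

for row in rows:
    print(row)
# Each row keeps its non-aggregated columns: the two IN rows carry
# count 2, while the single US row carries count 1.
```

Note that, unlike GROUP BY, every user still appears as its own row; the aggregate is simply repeated across each partition.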






Row Number


select transid, row_number() over (order by chargedamount desc) as rowNum
from paymentdetails
limit 5;


This returns the top 5 records, ordered by charged amount in decreasing order.


ROW_NUMBER can also be used to find the nth highest amount.

select pd.transid, pd.chargedamount
from (select transid, chargedamount, row_number() over (order by chargedamount desc ) as id
from paymentdetails) as pd
where pd.id = 4;
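The nth-highest lookup above can be checked with a runnable sketch, again using Python's sqlite3 and hypothetical paymentdetails rows (the transaction IDs and amounts are made up):

```python
import sqlite3

# Hypothetical paymentdetails data to illustrate finding the nth highest amount.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE paymentdetails (transid INTEGER, chargedamount REAL)")
conn.executemany(
    "INSERT INTO paymentdetails VALUES (?, ?)",
    [(1, 500.0), (2, 900.0), (3, 250.0), (4, 700.0), (5, 100.0)],
)

row = conn.execute(
    """
    SELECT pd.transid, pd.chargedamount
    FROM (SELECT transid, chargedamount,
                 ROW_NUMBER() OVER (ORDER BY chargedamount DESC) AS id
          FROM paymentdetails) AS pd
    WHERE pd.id = 4
    """
).fetchone()

print(row)  # (3, 250.0) -- the 4th highest charged amount
```

The subquery numbers every row by descending amount, and the outer query picks the row whose number equals n.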



There may also be cases where you need the best record from each group, such as the highest charged amount per country or location. Adding PARTITION BY restarts the row numbering within each group.

select pd.transid, pd.chargedamount , pd.country
from (select transid, chargedamount,country, row_number() over (partition by country order by chargedamount desc ) as id
from paymentdetails) as pd
where pd.id = 1;
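The per-group version can likewise be verified with a small sqlite3 sketch over hypothetical data (two countries, two payments each):

```python
import sqlite3

# Hypothetical data: find the highest charged amount per country
# using ROW_NUMBER with PARTITION BY.
conn = sqlite3.connect(":memory:")
conn.execute(
    "CREATE TABLE paymentdetails (transid INTEGER, chargedamount REAL, country TEXT)"
)
conn.executemany(
    "INSERT INTO paymentdetails VALUES (?, ?, ?)",
    [(1, 500.0, "IN"), (2, 900.0, "IN"), (3, 250.0, "US"), (4, 700.0, "US")],
)

rows = conn.execute(
    """
    SELECT pd.transid, pd.chargedamount, pd.country
    FROM (SELECT transid, chargedamount, country,
                 ROW_NUMBER() OVER (PARTITION BY country
                                    ORDER BY chargedamount DESC) AS id
          FROM paymentdetails) AS pd
    WHERE pd.id = 1
    """
).fetchall()

for r in rows:
    print(r)  # one row per country: its top payment
```

Because the numbering restarts per country, filtering on id = 1 keeps exactly the top payment of each country.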

