How to Remove Duplicate Records & Data in SQL? A Step-by-Step Guide with Examples

By Cristian G. Guasch • Updated: 02/09/25 • 3 min read

Let me share my experience with handling duplicate data in SQL databases. I’ve dealt with this issue many times, and I’ll show you some practical ways to clean up those pesky duplicates!

Why Duplicates Are a Headache

Let me tell you why I take duplicate data seriously. Imagine having multiple copies of the same customer record in your database – it’s like having multiple copies of the same contact in your phone! Here’s why this drives me crazy:

  • It messes up your reports: When I’m trying to count unique customers, duplicates inflate the numbers
  • It slows everything down: My queries take longer to run because they’re processing the same data multiple times
  • It wastes storage space: Why pay for extra storage when you’re just storing the same information repeatedly?

How I Spot Duplicates

Before I clean up duplicates, I need to find them first. Here are my go-to methods:

The Simple Way: Using DISTINCT

This is the easiest method I use when I just want to see unique values. Keep in mind that DISTINCT only filters the query results; it doesn’t remove anything from the table itself:

SELECT DISTINCT name, email, phone
FROM customers;
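
A related check I run before doing anything else: compare the total row count to the number of distinct combinations. This is just a sketch over the same three columns from the example above; swap in whichever columns define a duplicate in your data:

-- How many rows exist vs. how many unique (name, email, phone) combinations
SELECT
    (SELECT COUNT(*) FROM customers) AS total_rows,
    (SELECT COUNT(*)
     FROM (SELECT DISTINCT name, email, phone FROM customers) AS unique_combos) AS unique_rows;

If the two numbers match, there’s nothing to clean up; the gap between them tells me roughly how many duplicate rows I’m dealing with.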

The Detective Way: Finding Exact Duplicates

When I want to actually see which records are duplicated:

SELECT name, email, phone, COUNT(*) as duplicate_count
FROM customers
GROUP BY name, email, phone
HAVING COUNT(*) > 1;
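
When I need to see the full rows behind those counts, not just the grouped columns, I join the aggregate back to the table. Here’s a rough sketch; it assumes an id column like the later examples and that name, email, and phone aren’t NULL:

-- List every row that belongs to a duplicated (name, email, phone) group
SELECT c.id, c.name, c.email, c.phone
FROM customers c
JOIN (
    SELECT name, email, phone
    FROM customers
    GROUP BY name, email, phone
    HAVING COUNT(*) > 1
) d
  ON c.name = d.name
 AND c.email = d.email
 AND c.phone = d.phone
ORDER BY c.name, c.email, c.phone, c.id;

Sorting by the grouped columns puts each set of duplicates next to each other, which makes it easy to eyeball which copy I want to keep.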

My Favorite Ways to Remove Duplicates

Let me share the methods I use most often to clean up duplicate data:

1. The Quick Way: Using ROW_NUMBER()

This is my favorite method because it’s clean and efficient: it numbers the rows within each group of matching name, email, and phone (ordered by id), keeps the first one, and deletes the rest:

WITH DuplicateCTE AS (
    SELECT *,
        ROW_NUMBER() OVER (
            PARTITION BY name, email, phone
            ORDER BY id
        ) as row_num
    FROM customers
)
DELETE FROM DuplicateCTE
WHERE row_num > 1;
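
One caveat from experience: deleting through a CTE like this works in SQL Server, but MySQL and PostgreSQL won’t let you run DELETE against the CTE directly. On MySQL I reach for a self-join delete instead; this is a sketch that assumes the same id, name, email, and phone columns:

-- MySQL-style alternative: keep the lowest id in each group, delete the rest
DELETE c1
FROM customers c1
JOIN customers c2
  ON c1.name = c2.name
 AND c1.email = c2.email
 AND c1.phone = c2.phone
 AND c1.id > c2.id;

On PostgreSQL the same idea usually looks like DELETE FROM customers WHERE id NOT IN (SELECT MIN(id) FROM customers GROUP BY name, email, phone). Either way, the principle is identical: keep one representative row per group and remove everything else.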

2. The Safe Way: Creating a Clean Table

When I want to be extra careful with important data:

-- First, I create a new table with only unique records
SELECT DISTINCT name, email, phone
INTO customers_clean
FROM customers;

-- Then, if everything looks good, I can replace the old table
-- (note that customers_clean has only these three columns, so
-- re-create the id, keys, and constraints on it afterwards)
DROP TABLE customers;
EXEC sp_rename 'customers_clean', 'customers';
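
The version above follows SQL Server syntax (SELECT ... INTO and sp_rename). If you’re on MySQL or MariaDB, the same idea looks roughly like this; treat it as a sketch and adjust the column list to your own table:

-- MySQL-style: build the clean table, then swap the names
CREATE TABLE customers_clean AS
SELECT DISTINCT name, email, phone
FROM customers;

-- Swap both names in one statement so the table never disappears mid-rename
RENAME TABLE customers TO customers_old,
             customers_clean TO customers;

-- Once everything checks out, drop the old copy
DROP TABLE customers_old;

I like the two-step rename because the original data sticks around as customers_old until I’ve verified the clean table, which fits the “be extra careful” spirit of this approach.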

Tips From My Experience

Here’s what I’ve learned from cleaning up lots of duplicate data:

  • Always backup first: I can’t stress this enough; make a backup before deleting anything (see the quick backup sketch after this list)!
  • Start small: I always test my duplicate removal queries on a small subset of data first
  • Check your work: After removing duplicates, I double-check that I kept the right records
  • Prevent future duplicates: I add unique constraints to prevent duplicates from coming back
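
For that backup step, the simplest thing I do is snapshot the table before touching it. A minimal sketch in the SQL Server style used above (on MySQL, CREATE TABLE ... AS SELECT does the same job); the backup table name here is just a placeholder:

-- Copy every row into a backup table before deleting anything
SELECT *
INTO customers_backup   -- placeholder name; I usually add a date stamp
FROM customers;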

Keeping Your Data Clean

Here’s how I prevent duplicates from happening in the first place:

  1. Use primary keys: I always add a unique identifier for each record
  2. Add unique constraints: I put these on columns that should never have duplicates (see the example after this list)
  3. Regular checks: I run duplicate checks regularly to catch issues early
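
For the unique constraint piece, this is the kind of statement I mean. It’s a sketch that assumes the customers table from the earlier examples and that email is the column that must stay unique; pick the column (or combination of columns) that defines a duplicate in your data:

-- Reject any insert that would create a second row with the same email
ALTER TABLE customers
ADD CONSTRAINT uq_customers_email UNIQUE (email);

Keep in mind this only succeeds after the existing duplicates are gone; the database will refuse to add the constraint while duplicate values are still in the table.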

If you’d like more detail on any of these methods, or on preventing duplicates in specific situations, I’m happy to share more from my experience!

Remember, cleaning up duplicates might seem scary at first, but with these methods, it becomes much more manageable. The key is to be careful, test thoroughly, and always have a backup plan!