```json title="absolute-beginners/backend-beginner/testing/_category_.json"
{
  "label": "Testing",
  "position": 10,
  "link": {
    "type": "generated-index",
    "description": "Don't just write code, write reliable code. Learn the different levels of testing to ensure your backend is bug-free and production-ready."
  }
}
```
---
sidebar_position: 4
title: "Functional & E2E Testing"
sidebar_label: "4. Functional Testing"
description: "Learn how to test your API endpoints from the outside-in to ensure the business logic works for the user."
---

Functional testing (often called **End-to-End** or **Black Box** testing) doesn't care about your clean code, your design patterns, or your variable names. It only cares about one thing: **"Does the feature actually work for the user?"**

In the **CodeHarborHub** backend, this usually means sending a real HTTP request to your API and checking if you get the correct HTTP response.

## The "Black Box" Concept

Imagine your API is a black box. You can't see inside it.
1. You push a button (Send a `POST` request to `/api/register`).
2. Something happens inside.
3. You check the result (Did I get a `201 Created` status and a Welcome email?).

## Functional vs. Unit Testing

| Feature | Unit Testing | Functional Testing |
| :--- | :--- | :--- |
| **Viewpoint** | Developer (White Box) | User (Black Box) |
| **Goal** | Correctness of logic | Correctness of feature |
| **Example** | Testing the `sum()` function | Testing the `Checkout` process |
| **Dependencies** | Mocked (Fake) | Real (Server + DB) |

## Tools for Functional Testing

To test your API endpoints without opening a browser or using Postman manually, we use **Supertest**. It allows us to "simulate" HTTP requests inside our Jest tests.

### Example: Testing the Signup Endpoint

```javascript
import request from 'supertest';
import app from '../app'; // Your Express app
import { prisma } from '../lib/prisma';

describe('POST /api/auth/signup', () => {

  test('should create a new user and return 201', async () => {
    // 1. Send the request
    const response = await request(app)
      .post('/api/auth/signup')
      .send({
        name: 'Ajay Dhangar',
        email: 'test@codeharborhub.com',
        password: 'securePassword123'
      });

    // 2. Assert the HTTP status
    expect(response.status).toBe(201);

    // 3. Assert the response body
    expect(response.body).toHaveProperty('id');
    expect(response.body.name).toBe('Ajay Dhangar');

    // 4. Verification: is it actually in the DB?
    const userInDb = await prisma.user.findUnique({
      where: { email: 'test@codeharborhub.com' }
    });
    expect(userInDb).not.toBeNull();
  });

  test('should return 400 if email is missing', async () => {
    const response = await request(app)
      .post('/api/auth/signup')
      .send({ name: 'Ajay' });

    expect(response.status).toBe(400);
    expect(response.body.message).toMatch(/required/);
  });
});
```

## The "Happy Path" vs. "Edge Cases"

In functional testing at **CodeHarborHub**, you must cover all three:

1. **The Happy Path:** Everything goes perfectly (User enters correct data, server is up).
2. **The Sad Path:** The user makes a mistake (Invalid email, password too short).
3. **The Edge Case:** What happens if a user tries to register with an email that already exists?
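
The three paths above can be sketched without a server, using a hypothetical in-memory `registerUser` helper (the field names and status codes here are illustrative, not CodeHarborHub's real API):

```javascript
// Hypothetical in-memory sketch of the three paths — not CodeHarborHub's real API.
const existingEmails = new Set();

function registerUser({ name, email, password } = {}) {
  if (!email || !password) {
    return { status: 400, body: { message: 'email and password are required' } }; // sad path
  }
  if (existingEmails.has(email)) {
    return { status: 409, body: { message: 'Email already registered' } }; // edge case
  }
  existingEmails.add(email);
  return { status: 201, body: { id: existingEmails.size, name } }; // happy path
}

console.log(registerUser({ name: 'Ajay', email: 'a@x.com', password: 'pw' }).status); // 201
console.log(registerUser({ name: 'Ajay', email: 'a@x.com', password: 'pw' }).status); // 409 (duplicate)
console.log(registerUser({ name: 'Ajay' }).status); // 400 (missing fields)
```

A real functional test would exercise the same three cases through Supertest instead of calling the helper directly.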

## Summary Checklist

* [x] I understand that Functional Testing is "Black Box" testing.
* [x] I know that Functional Tests check the API from the user's perspective.
* [x] I can use **Supertest** to simulate HTTP requests.
* [x] I understand the importance of testing "Sad Paths" and "Edge Cases."

:::info Best Practice
Functional tests are slower than unit tests because they start the entire server and talk to the database. Run them **after** your unit tests have passed to catch "big picture" bugs before you deploy to production!
:::
---
sidebar_position: 3
title: Integration Testing
sidebar_label: "3. Integration Testing"
description: "Learn how to test the interaction between different modules, such as your code and the database."
---

While Unit Tests prove that a single brick is strong, **Integration Tests** prove that the mortar (the glue) holds the bricks together to form a wall.

In a typical **CodeHarborHub** backend, this means testing if your **Service Layer** can successfully talk to your **Database** or an **External API**.


## 🧐 The "Why" Behind Integration Tests

You might have a perfectly working `User` object and a perfectly working `Database`. But if the User object expects a `firstName` and the Database table is named `first_name`, your app will crash.

**Unit tests won't catch this. Integration tests will.**

## What Are We Testing?

In integration testing, we move beyond simple logic and start testing the "edges" of our application:

1. **Database Integration:** Does my query actually return data from PostgreSQL?
2. **API Integration:** Does my app correctly parse the JSON response from a payment gateway?
3. **File System:** Can my app successfully write a PDF report to the `/uploads` folder?

## Setting Up the Environment

Because integration tests touch real systems, they are slower and more complex than unit tests. Here is the professional workflow we use:

import Tabs from '@theme/Tabs';
import TabItem from '@theme/TabItem';

<Tabs>
  <TabItem value="db-test" label="🗄️ The Test Database" default>
    **Never** run integration tests against your "Production" or "Development" database.

    1. Create a separate `test_db`.
    2. Run **Migrations** to set up the schema.
    3. Seed the database with "dummy" data.
    4. Wipe the data after the tests finish.
  </TabItem>
  <TabItem value="env" label="🔑 Environment Variables">
    Use a `.env.test` file to point your app at the test database instead of the real one.
  </TabItem>
</Tabs>
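
One common way to wire this up is to branch on the environment when resolving the connection string. A minimal sketch, with illustrative variable names and URLs (not CodeHarborHub's real config):

```javascript
// Sketch: resolve the connection string from the environment.
// The variable names and URLs here are illustrative placeholders.
function resolveDatabaseUrl(env) {
  if (env.DATABASE_URL) return env.DATABASE_URL; // explicit override wins
  return env.NODE_ENV === 'test'
    ? 'postgresql://localhost:5432/test_db'
    : 'postgresql://localhost:5432/dev_db';
}

console.log(resolveDatabaseUrl({ NODE_ENV: 'test' }));        // points at test_db
console.log(resolveDatabaseUrl({ NODE_ENV: 'development' })); // points at dev_db
```

With a setup like this, your test runner only needs to set `NODE_ENV=test` (or load `.env.test`) and every query lands in the disposable database.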

## Example: Testing a User Service

Let's test if our `UserService` can actually save a user into the database using **Prisma**.

```javascript
import { UserService } from '../services/userService';
import { prisma } from '../lib/prisma';

describe('UserService Integration', () => {

  // Clean up the database before each test
  beforeEach(async () => {
    await prisma.user.deleteMany();
  });

  test('should successfully create a user in the database', async () => {
    const userService = new UserService();
    const userData = { name: 'Ajay', email: 'ajay@codeharborhub.com' };

    // Act: Call the service that talks to the DB
    const newUser = await userService.createUser(userData);

    // Assert: Check if it's in the real DB
    // (findUnique returns null when nothing matches, so check for null explicitly)
    const dbUser = await prisma.user.findUnique({
      where: { email: 'ajay@codeharborhub.com' }
    });

    expect(dbUser).not.toBeNull();
    expect(dbUser.name).toBe('Ajay');
  });
});
```

## Unit vs. Integration

| Feature | Unit Testing | Integration Testing |
| :--- | :--- | :--- |
| **Scope** | One function | Multiple modules |
| **Dependencies** | Mocked (Fake) | Real (DB, APIs) |
| **Speed** | Milliseconds | Seconds |
| **Debugging** | Easy (Know exactly where) | Harder (Could be the DB, Config, or Code) |

## Summary Checklist

* [x] I understand that integration tests check the "interaction" between modules.
* [x] I know that I should use a dedicated **Test Database**.
* [x] I understand that integration tests catch bugs that unit tests miss (like schema mismatches).
* [x] I know how to use `beforeEach` to keep my test database clean.

:::warning Don't Overdo It!
Because integration tests are slower, don't try to test every single "if/else" condition here. Use **Unit Tests** for the logic and **Integration Tests** just to ensure the connection works!
:::
---
sidebar_position: 1
title: Introduction to Testing
sidebar_label: "1. Why We Test"
description: "Understand the mindset of software testing and why it is the most important habit of a professional developer."
---

Imagine you are building a bridge. You wouldn't wait until the bridge is finished to see if it can hold a car, right? You would test every bolt, every beam, and every cable **during** the build.

In Software Engineering, **Testing** is the process of verifying that your code behaves exactly as you intended. At **CodeHarborHub**, we follow one simple rule: **"If it's not tested, it's already broken."**

## 🧐 The "Confidence" Factor

Why do we spend 30% of our time writing tests?

1. **Fearless Refactoring:** Want to change your code to make it cleaner? If you have tests, you'll know instantly if you broke something.
2. **Documentation:** A test tells other developers (and your future self) exactly how a function is supposed to work.
3. **Cost Savings:** Finding a bug while coding costs **`$1`**. Finding that same bug after it's live costs **`$1,000`** in lost users and emergency fixes.

## The Testing Pyramid

Not all tests are created equal. A professional strategy looks like a pyramid:

### 1. Unit Tests (The Base)
These test the smallest "units" of code (like a single function).
* **Speed:** ⚡ Lightning fast (thousands per second).
* **Cost:** 💰 Very cheap to write.
* **Example:** Testing if a `validateEmail()` function returns `false` for `"invalid-email"`.
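
As a concrete taste of a unit under test, here is a minimal `validateEmail` sketch. The regex is illustrative, not RFC-complete:

```javascript
// A minimal validateEmail sketch — the regex is illustrative, not RFC-complete.
function validateEmail(email) {
  return /^[^\s@]+@[^\s@]+\.[^\s@]+$/.test(email);
}

console.log(validateEmail('invalid-email'));          // false
console.log(validateEmail('ajay@codeharborhub.com')); // true
```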

### 2. Integration Tests (The Middle)
These test how different parts of your app work together.
* **Speed:** 🐢 Slower (requires a database or an API).
* **Focus:** Does the `User Service` correctly save a user to the `Database`?

### 3. E2E / Functional Tests (The Top)
These test the entire "End-to-End" journey of a user.
* **Speed:** 🐌 Very slow (simulates a real browser/user).
* **Example:** "A user signs up, receives a welcome email, and can log in."

## Manual vs. Automated Testing

At **CodeHarborHub**, we move away from manual clicking and toward **Automated Scripts**.

| Feature | Manual Testing | Automated Testing |
| :--- | :--- | :--- |
| **Execution** | Human-driven (Slow) | Machine-driven (Fast) |
| **Reliability** | Prone to human error | Consistent every time |
| **Cost** | High (Time = Money) | Low (Initial setup only) |
| **Regression** | Hard to repeat | Runs on every "Git Push" |

## The Developer's Toolbox

To start testing in the Node.js ecosystem, you will encounter these terms:

* **Test Runner:** The engine that finds and runs your tests (e.g., **Jest**, **Vitest**, **Mocha**).
* **Assertion Library:** The language used to define success (e.g., `expect(result).toBe(true)`).
* **Mocks/Stubs:** "Fake" versions of real services (like a fake Payment Gateway) so you don't spend real money during tests.

## Summary Checklist
* [x] I understand that testing provides a "Safety Net" for my code.
* [x] I can explain why Unit Tests are the foundation of the pyramid.
* [x] I know the difference between Manual and Automated testing.
* [x] I understand that catching bugs early saves time and money.

:::tip Mindset Shift
Don't think of testing as "finding bugs." Think of it as **defining requirements**. If the test passes, your requirement is met. If it fails, your code hasn't finished its job yet.
:::
---
sidebar_position: 6
title: "Mocking & Stubs"
sidebar_label: "6. Mocking & Stubs"
description: "Learn how to fake external dependencies like APIs and Databases to keep your tests fast and reliable."
---

In a real-world application like **CodeHarborHub**, your code doesn't live in a bubble. It talks to:
* 📧 Email Services (SendGrid/Nodemailer)
* 💳 Payment Gateways (Stripe/Razorpay)
* ☁️ Cloud Storage (AWS S3)
* 🌐 External APIs (GitHub/Google)

If you use the **real** services during testing, your tests will be slow, they might cost you money, and they will fail if the internet goes down. We solve this by using **Mocks** and **Stubs**.

## 🧐 What’s the Difference?

While people often use these terms interchangeably, there is a technical difference:

| Concept | Simple Definition | Analogy |
| :--- | :--- | :--- |
| **Stub** | A "dumb" object that returns a hardcoded value. | A pre-recorded voicemail message. |
| **Mock** | A "smart" object that records *how* it was called. | A spy who reports back: "The target called me twice at 5:00 PM." |
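
The distinction can be shown with a tiny hand-rolled fake (Jest's `jest.fn()` does this bookkeeping for you; the `sendEmail` name here is just for illustration):

```javascript
// Hand-rolled illustration: a stub returns data; a mock also records how it was called.
function makeMockSendEmail() {
  const calls = [];
  const fn = (to) => {
    calls.push(to); // mock behaviour: remember every call
    return true;    // stub behaviour: hardcoded return value
  };
  fn.calls = calls;
  return fn;
}

const sendEmail = makeMockSendEmail();
sendEmail('ajay@example.com');

console.log(sendEmail.calls.length); // 1 — the "spy report"
console.log(sendEmail.calls[0]);     // 'ajay@example.com'
```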

## When to Use Mocking

You should mock any dependency that is **non-deterministic** (unpredictable) or **external**:

1. **Network Requests:** Don't hit a real URL; mock the response.
2. **Time:** If a feature only works on weekends, "mock" the system clock to be a Saturday.
3. **Randomness:** If a function generates a random ID, mock it to always return `123`.
4. **Costly Actions:** Mocking the "Send Email" function so you don't spam real users during testing.
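
For example, randomness can be stubbed by hand to make a test deterministic (in Jest you would typically reach for `jest.spyOn(Math, 'random')` instead):

```javascript
// Sketch: stubbing Math.random by hand so a "random" ID becomes deterministic.
function generateId() {
  return Math.floor(Math.random() * 1000);
}

const realRandom = Math.random;
Math.random = () => 0.123; // the stub: always returns the same value

console.log(generateId()); // 123 — predictable, so it can be asserted

Math.random = realRandom; // always restore the original afterwards
```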

## Mocking with Jest

Let's say we have a function that sends a "Course Completion" email to a student.

### The Service

```javascript title="emailService.js"
export const sendWelcomeEmail = async (email) => {
  // Imagine this calls a real API like SendGrid
  const response = await fetch('https://api.sendgrid.com/v3/send', { ... });
  return response.ok;
};
```

### The Test
We want to test our `signup` logic without actually sending an email.

```javascript title="auth.test.js"
import * as emailService from './emailService';
import { signupUser } from './auth';

// 1. Tell Jest to "hijack" the email service
jest.mock('./emailService');

test('signup should call the email service', async () => {
  // 2. Set up the "Mock" to return a successful value
  emailService.sendWelcomeEmail.mockResolvedValue(true);

  const result = await signupUser('ajay@example.com');

  // 3. Assert: Check if the function was CALLED
  expect(emailService.sendWelcomeEmail).toHaveBeenCalledTimes(1);
  expect(emailService.sendWelcomeEmail).toHaveBeenCalledWith('ajay@example.com');
  expect(result.success).toBe(true);
});
```

## The Dangers of Over-Mocking

Mocking is powerful, but if you mock **everything**, your tests become useless.

* **Bad:** Mocking your own internal database logic in an *Integration Test*. (You want to test the real DB!)
* **Good:** Mocking the Stripe API in a *Unit Test*. (You don't want to charge a real card!)

> **Rule of Thumb:** Mock the things you don't control (3rd party APIs). Don't mock the things you do control (your own logic).

## Summary Checklist
* [x] I understand that Mocks replace "Real-world" unpredictable services.
* [x] I know that **Stubs** provide data, while **Mocks** verify behavior.
* [x] I can use `jest.mock()` to fake a module.
* [x] I understand that over-mocking can lead to tests that pass even when the app is broken.

:::success 🎉 Testing Module Complete!
Congratulations! You've learned how to build a professional testing suite. From **Unit Tests** to **Mocks**, you now have the tools to build a robust, industrial-level backend for **CodeHarborHub**.
:::