@@ -58,6 +58,7 @@ const FunctionLink = memo(function FunctionLink({
*/
onClick={(e) => {
e.preventDefault()
menuState.setMenuActiveRefId(id)
history.pushState({}, '', url)
const reduceMotion = window.matchMedia('(prefers-reduced-motion: reduce)').matches
document.getElementById(slug)?.scrollIntoView({
3 changes: 3 additions & 0 deletions apps/docs/content/guides/api/rest/client-libs.mdx
@@ -18,10 +18,13 @@ Supabase provides client libraries for the REST and Realtime APIs. Some librarie

## Community libraries

{/* supa-mdx-lint-disable Rule003Spelling */}

| `Language` | `Source Code` | `Documentation` |
| ----------------------- | -------------------------------------------------------------------------------- | ------------------------------------------- |
| C# | [supabase-csharp](https://github.com/supabase-community/supabase-csharp) | [Docs](/docs/reference/csharp/introduction) |
| Go | [supabase-go](https://github.com/supabase-community/supabase-go) | |
| Kotlin | [supabase-kt](https://github.com/supabase-community/supabase-kt) | [Docs](/docs/reference/kotlin/introduction) |
| Ruby | [supabase-rb](https://github.com/supabase-community/supabase-rb) | |
| Godot Engine (GDScript) | [supabase-gdscript](https://github.com/supabase-community/godot-engine.supabase) | |
| Elixir | [supabase-elixir](https://github.com/supabase-community/supabase-ex) | |
2 changes: 1 addition & 1 deletion apps/docs/content/guides/platform/compute-and-disk.mdx
@@ -173,5 +173,5 @@ As mentioned in the Postgres [documentation](https://postgresqlco.nf/doc/en/para

### Constraints

- You can modify disk attributes up to **four times** within a rolling 24-hour window. A new modification can be initiated as soon as the previous one completes. If you reach this limit, you will encounter throttling and must wait for the rolling 24-hour window to permit further adjustments.
- After **any** disk attribute change, there is a cooldown period of approximately six hours before you can make further adjustments. During this time, no changes are allowed. If you encounter throttling, you’ll need to wait until the cooldown period concludes before making additional modifications.
- You can increase disk size but cannot decrease it.
7 changes: 2 additions & 5 deletions apps/docs/content/guides/platform/database-size.mdx
@@ -71,7 +71,6 @@ Vacuum operations can temporarily increase resource utilization, which may adver
</Admonition>

Supabase projects have automatic vacuuming enabled, which ensures that these operations are performed regularly to keep the database healthy and performant.

It is possible to [fine-tune](https://www.percona.com/blog/2018/08/10/tuning-autovacuum-in-postgresql-and-autovacuum-internals/) the [autovacuum parameters](https://www.enterprisedb.com/blog/postgresql-vacuum-and-analyze-best-practice-tips), or [manually initiate](https://www.postgresql.org/docs/current/sql-vacuum.html) vacuum operations.
Running a manual vacuum after deleting large amounts of data from your DB could help reduce the database size reported by Postgres.

@@ -87,9 +86,7 @@ Supabase uses network-attached storage to balance performance with scalability.

Projects on the Pro Plan and higher have auto-scaling disks.

Disk size expands automatically when the database reaches 90% of the allocated disk size. The disk is expanded to be 50% larger (for example, 8 GB -> 12 GB).

Auto-scaling is limited to four modifications within a rolling 24-hour window. While a new modification can be initiated immediately after the previous one completes, reaching the daily quota of four resizes will prevent further scaling until the rolling window allows it. If you reach 95% disk utilization and have exhausted your modification quota, your project will enter read-only mode.
Disk size expands automatically when the database reaches 90% of the allocated disk size. The disk is expanded to be 50% larger (for example, 8 GB -> 12 GB). Auto-scaling can only take place once every 6 hours. If you reach 95% of the disk space within those 6 hours, your project will enter read-only mode.
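The expansion rule described above can be sketched as a small function. This is an illustrative sketch only, not Supabase's implementation; the constant names are invented for clarity:

```typescript
// Illustrative sketch of the auto-scaling rule: expand the disk by 50%
// once usage crosses 90% of the allocated size. Thresholds assumed from
// the surrounding text, not taken from any Supabase API.
const EXPAND_THRESHOLD = 0.9
const EXPAND_FACTOR = 1.5

function nextDiskSize(allocatedGb: number, usedGb: number): number {
  // Expansion triggers at 90% utilization; otherwise the size is unchanged.
  return usedGb >= EXPAND_THRESHOLD * allocatedGb
    ? allocatedGb * EXPAND_FACTOR
    : allocatedGb
}

console.log(nextDiskSize(8, 7.3)) // above the 90% threshold: expands to 12
console.log(nextDiskSize(8, 7.0)) // below the threshold: stays at 8
```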

<Admonition type="note">

@@ -103,7 +100,7 @@ Disk size can also be manually expanded on the [Database Settings page](/dashboa

You may want to import a lot of data into your database, which requires multiple disk expansions. For example, uploading more than 1.5x the current size of your database storage will put your database into [read-only mode](#read-only-mode). If so, it is highly recommended that you increase the disk size manually on the [Database Settings page](/dashboard/project/_/database/settings).

Due to restrictions on the underlying cloud provider, disk modifications are limited to four operations within a rolling 24-hour window. While a new modification can be initiated as soon as the previous one completes, you will be unable to make further adjustments if you reach this daily quota until the rolling 24-hour window permits it.
Due to restrictions on the underlying cloud provider, disk expansions can occur only once every six hours. During the six-hour cooldown window, the disk cannot be resized again.

</Admonition>

@@ -51,7 +51,7 @@ Since the wraparound prevention autovacuum cannot be stopped, the best approach
2. **Increase Disk Throughput/IOPS:**
- **Action:** If disk I/O utilization is also consistently high (e.g., near 100%), consider temporarily increasing your disk's provisioned IOPS and throughput.
- **Why it helps:** Autovacuum is an I/O-intensive operation, involving a lot of reading and writing. Higher disk performance can significantly speed up the process.
- **Considerations:** Cloud providers may limit the number of disk modifications (e.g., up to four in 24 hours). You can start a new modification immediately after the previous one finishes, provided you have not exceeded the daily quota.
- **Considerations:** Cloud providers often have limitations, such as a cooldown period (e.g., 6 hours) between disk modification operations.

### **Monitoring progress and future prevention**

@@ -1,23 +1,23 @@
---
title = "How to bypass modification limits"
title = "How to bypass cooldown period"
topics = [
"platform"
]
keywords = ["modification limits", "disk resize"]
keywords = ["cooldown", "disk resize"]
---

This limit is not a Supabase limitation. It is rooted in how Amazon EBS (the underlying storage for our databases) manages volume modifications. AWS allows up to four modifications per volume within a rolling 24-hour window. While you can start a new modification immediately after a previous one completes, AWS enforces a quota that prevents a fifth modification within that 24-hour period to ensure data integrity and volume stability.
This cooldown period isn't a Supabase limitation. It's rooted in how Amazon EBS (the underlying storage for our databases) manages volume modifications. After modifying a volume (e.g. increasing its size, changing its type, or adjusting IOPS), AWS enforces a mandatory 6-hour cooldown before allowing another modification on the same volume. This ensures data integrity and stability of the volume under load.

From the [**AWS docs**](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_ModifyVolume.html):

> After you initiate a volume modification, you must wait for that modification to reach the completed state before you can initiate another modification for the same volume. You can modify a volume up to four times within a rolling 24-hour period, as long as the volume is in the in-use or available state, and all previous modifications for that volume are completed. If you exceed this limit, you get an error message that indicates when you can perform your next modification.
> After modifying a volume, you must wait at least six hours and ensure that the volume is in the in-use or available state before you can modify the same volume. This is sometimes referred to as a cooldown period.

There are a few options to work around this limit if you have already used your four modifications and need to make further adjustments:
There are a few options to work around the cooldown, depending on the state of your database:

1. **Restore to a new project**: This spins up a new instance with a new disk, bypassing the 24-hour quota entirely. It’s a great option if you're okay with a new project and project refactoring. [**Docs: restoring to a new project**](/docs/guides/platform/backups#restore-to-a-new-project).
2. **pg_upgrade**: Our [**pg_upgrade**](/docs/guides/platform/upgrading) implementation migrates your data to a new disk, which resets the modification count. The main requirement here is that the database must be operational - it can't run if your DB is in a degraded or inaccessible state.
1. **Restore to a new project**: This spins up a new instance with a new disk, bypassing the cooldown entirely. It’s a great option if you're okay with a new project and project refactoring. [**Docs: restoring to a new project**](/docs/guides/platform/backups#restore-to-a-new-project).
2. **pg_upgrade**: Our [**pg_upgrade**](/docs/guides/platform/upgrading) implementation migrates your data to a new disk, which skips the cooldown. The main requirement here is that the database must be operational - it can't run if your DB is in a degraded or inaccessible state.
3. **Pause and Restore**: This also migrates to a new disk but is only available for projects on the Free plan. If you're not on the Free plan, you'd need to [**transfer your project to an organization on the Free plan**](/docs/guides/platform/project-transfer) first.

If the database is down or locked in a bad state (e.g corrupted or stuck during resize), and you have hit the modification limit, the only path forward is to wait until the rolling 24-hour window allows for another modification.
If the database is down or locked in a bad state (e.g. corrupted or stuck during resize), the only path forward is to wait until the cooldown expires and the disk resize job completes in the queue.

More on this in our doc here: [**https://supabase.com/docs/guides/platform/database-size#disk-size**](/docs/guides/platform/database-size#disk-size).
72 changes: 69 additions & 3 deletions apps/docs/features/docs/Reference.navigation.client.tsx
@@ -7,7 +7,16 @@ import { ChevronUp } from 'lucide-react'
import Link from 'next/link'
import { usePathname } from 'next/navigation'
import type { HTMLAttributes, MouseEvent, PropsWithChildren } from 'react'
import { createContext, useCallback, useContext, useEffect, useMemo, useRef, useState } from 'react'
import {
createContext,
useCallback,
useContext,
useEffect,
useMemo,
useRef,
useState,
useSyncExternalStore,
} from 'react'

import { cn } from 'ui'

@@ -17,6 +26,63 @@ import { BASE_PATH } from '~/lib/constants'

export const ReferenceContentInitiallyScrolledContext = createContext<boolean>(false)

let patchCount = 0
let originalPushState: typeof history.pushState | null = null
let originalReplaceState: typeof history.replaceState | null = null
const pathnameListeners = new Set<() => void>()

function notifyPathnameListeners() {
pathnameListeners.forEach((callback) => callback())
}

function subscribeToPathname(callback: () => void) {
pathnameListeners.add(callback)

if (patchCount === 0) {
window.addEventListener('popstate', notifyPathnameListeners)

originalPushState = history.pushState.bind(history)
history.pushState = (...args) => {
originalPushState!(...args)
notifyPathnameListeners()
}

originalReplaceState = history.replaceState.bind(history)
history.replaceState = (...args) => {
originalReplaceState!(...args)
notifyPathnameListeners()
}
}
patchCount++

return () => {
pathnameListeners.delete(callback)
patchCount--

if (patchCount === 0) {
window.removeEventListener('popstate', notifyPathnameListeners)
history.pushState = originalPushState!
history.replaceState = originalReplaceState!
originalPushState = null
originalReplaceState = null
}
}
}

function getPathname() {
if (typeof window === 'undefined') return ''
const pathname = window.location.pathname
return pathname.startsWith(BASE_PATH) ? pathname.slice(BASE_PATH.length) : pathname
}

function getServerPathname() {
return ''
}

function useCurrentPathname() {
return useSyncExternalStore(subscribeToPathname, getPathname, getServerPathname)
}

export function ReferenceContentScrollHandler({
libPath,
version,
@@ -184,7 +250,7 @@ export function RefLink({
}) {
const ref = useRef<HTMLAnchorElement>(null)

const pathname = usePathname()
const pathname = useCurrentPathname()
const href = deriveHref(basePath, section)
const isActive =
pathname === href || (pathname === basePath && href.replace(basePath, '') === '/introduction')
@@ -230,7 +296,7 @@
function useCompoundRefLinkActive(basePath: string, section: AbbrevApiReferenceSection) {
const [open, _setOpen] = useState(false)

const pathname = usePathname()
const pathname = useCurrentPathname()
const parentHref = deriveHref(basePath, section)
const isParentActive = pathname === parentHref

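The change above pairs `useSyncExternalStore` with a reference-counted monkey-patch of `history.pushState`/`history.replaceState`, so every subscriber is notified on navigation and the patch is removed when the last subscriber leaves. A minimal, browser-free sketch of that subscribe/notify pattern (the fake `history` object and all names here are illustrative, not the PR's exact code):

```typescript
// Stand-in for the browser's history object so the sketch runs anywhere.
type Fn = (...args: unknown[]) => void
const fakeHistory = { pushState: ((..._args: unknown[]) => {}) as Fn }

let patchCount = 0
let originalPushState: Fn | null = null
const listeners = new Set<() => void>()

function subscribe(callback: () => void): () => void {
  listeners.add(callback)
  if (patchCount === 0) {
    // First subscriber installs the patch exactly once.
    originalPushState = fakeHistory.pushState.bind(fakeHistory)
    fakeHistory.pushState = (...args: unknown[]) => {
      originalPushState!(...args)
      listeners.forEach((l) => l()) // notify after the real call
    }
  }
  patchCount++
  return () => {
    listeners.delete(callback)
    if (--patchCount === 0) {
      // Last unsubscriber restores the original method.
      fakeHistory.pushState = originalPushState!
      originalPushState = null
    }
  }
}

let fired = 0
const unsubscribe = subscribe(() => fired++)
fakeHistory.pushState({}, '', '/docs/new-path') // listener fires
unsubscribe()
fakeHistory.pushState({}, '', '/docs/another') // patch removed, no fire
console.log(fired) // prints 1
```

The reference count matters because several `RefLink` components may mount at once: patching per-component would wrap `pushState` multiple times, while counting subscribers keeps exactly one wrapper alive.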
Binary file removed apps/docs/public/img/supabase-auth-cover.png
10 changes: 5 additions & 5 deletions apps/docs/spec/cli_v1_commands.yaml
@@ -1917,9 +1917,9 @@ commands:
summary: Show statistics related to vacuum operations per table
description: |2

This shows you stats about the vacuum activities for each table. Due to Postgres' [MVCC](https://www.postgresql.org/docs/current/mvcc.html) when data is updated or deleted new rows are created and old rows are made invisible and marked as "dead tuples". Usually the [autovaccum](https://supabase.com/docs/guides/platform/database-size#vacuum-operations) process will aysnchronously clean the dead tuples.
This shows you stats about the vacuum activities for each table. Due to Postgres' [MVCC](https://www.postgresql.org/docs/current/mvcc.html) when data is updated or deleted new rows are created and old rows are made invisible and marked as "dead tuples". Usually the [autovacuum](https://supabase.com/docs/guides/platform/database-size#vacuum-operations) process will asynchronously clean the dead tuples.

The command lists when the last vacuum and last auto vacuum took place, the row count on the table as well as the count of dead rows and whether autovacuum is expected to run or not. If the number of dead rows is much higher than the row count, or if an autovacuum is expected but has not been performed for some time, this can indicate that autovacuum is not able to keep up and that your vacuum settings need to be tweaked or that you require more compute or disk IOPS to allow autovaccum to complete.
The command lists when the last vacuum and last auto vacuum took place, the row count on the table as well as the count of dead rows and whether autovacuum is expected to run or not. If the number of dead rows is much higher than the row count, or if an autovacuum is expected but has not been performed for some time, this can indicate that autovacuum is not able to keep up and that your vacuum settings need to be tweaked or that you require more compute or disk IOPS to allow autovacuum to complete.


```
@@ -2296,9 +2296,9 @@ commands:
Estimates space allocated to a relation that is full of dead tuples
description: |2

This command displays an estimation of table "bloat" - Due to Postgres' [MVCC](https://www.postgresql.org/docs/current/mvcc.html) when data is updated or deleted new rows are created and old rows are made invisible and marked as "dead tuples". Usually the [autovaccum](https://supabase.com/docs/guides/platform/database-size#vacuum-operations) process will asynchronously clean the dead tuples. Sometimes the autovaccum is unable to work fast enough to reduce or prevent tables from becoming bloated. High bloat can slow down queries, cause excessive IOPS and waste space in your database.
This command displays an estimation of table "bloat" - Due to Postgres' [MVCC](https://www.postgresql.org/docs/current/mvcc.html) when data is updated or deleted new rows are created and old rows are made invisible and marked as "dead tuples". Usually the [autovacuum](https://supabase.com/docs/guides/platform/database-size#vacuum-operations) process will asynchronously clean the dead tuples. Sometimes the autovacuum is unable to work fast enough to reduce or prevent tables from becoming bloated. High bloat can slow down queries, cause excessive IOPS and waste space in your database.

Tables with a high bloat ratio should be investigated to see if there are vacuuming is not quick enough or there are other issues.
Tables with a high bloat ratio should be investigated to see if vacuuming is not quick enough or if there are other issues.

```
TYPE │ SCHEMA NAME │ OBJECT NAME │ BLOAT │ WASTE
@@ -3258,7 +3258,7 @@ commands:

By default, all schemas in the target database are diffed. Use the `--schema public,extensions` flag to restrict diffing to a subset of schemas.

While the diff command is able to capture most schema changes, there are cases where it is known to fail. Currently, this could happen if you schema contains:
While the diff command is able to capture most schema changes, there are cases where it is known to fail. Currently, this could happen if your schema contains:

- Changes to publication
- Changes to storage buckets