2 changes: 1 addition & 1 deletion DEVELOPERS.md
@@ -35,7 +35,7 @@ To ensure a positive and inclusive environment, please read our [code of conduct
You will need to install and configure the following dependencies on your machine to build [Supabase](https://supabase.com):

- [Git](https://git-scm.com/)
- [Node.js v22.x or higher](https://nodejs.org)
- [Node.js](https://nodejs.org) version as documented in [.nvmrc](./.nvmrc)
- [pnpm](https://pnpm.io/) version 9.x.x or higher
- [make](https://www.gnu.org/software/make/) or the equivalent to `build-essentials` for your OS
- [Docker](https://docs.docker.com/get-docker/) (to run studio locally)
14 changes: 9 additions & 5 deletions apps/docs/content/guides/auth/social-login/auth-twitter.mdx
@@ -18,7 +18,7 @@ provider will be deprecated in future releases.
Setting up X / Twitter logins for your application consists of 3 parts:

- Create and configure an X Project and App on the [X Developer Dashboard](https://developer.x.com/en/portal/dashboard).
- Add your X `API Key` and `API Secret Key` to your [Supabase Project](/dashboard).
- Add your X OAuth 2.0 `Client ID` and `Client Secret` to your [Supabase Project](/dashboard).
- Add the login code to your [Supabase JS Client App](https://github.com/supabase/supabase-js).

## Access your X developer account
@@ -37,8 +37,7 @@ Setting up X / Twitter logins for your application consists of 3 parts:
- Select your use case, click `Next`.
- Enter a description for your project, click `Next`.
- Enter a name for your app, click `Next`.
- Copy and save your `API Key` (this is your `client_id`).
- Copy and save your `API Secret Key` (this is your `client_secret`).
- Copy and save your **API Key** and **API Secret Key** (these are used for OAuth 1.0a, which is being deprecated).
- Click on `App settings` to proceed to the next steps.
- At the bottom, you will find `User authentication settings`. Click on `Set up`.
- Under `User authentication settings`, you can configure `App permissions`.
@@ -50,6 +49,11 @@ Setting up X / Twitter logins for your application consists of 3 parts:
- Enter your `Terms of service URL`.
- Enter your `Privacy policy URL`.
- Click `Save`.
- After saving, navigate to `Keys and tokens` on your App page.
- Scroll to the bottom of the page and copy your **Client ID**.
- Click the `Regenerate` button next to **Client Secret**.
- In the confirmation modal, click `Yes, regenerate`.
- Copy and save your **Client Secret**.

## Enter your X credentials into your Supabase project

@@ -68,8 +72,8 @@ curl -X PATCH "https://api.supabase.com/v1/projects/$PROJECT_REF/config/auth" \
-H "Content-Type: application/json" \
-d '{
"external_x_enabled": true,
"external_x_client_id": "your-x-api-key",
"external_x_secret": "your-x-api-secret-key"
"external_x_client_id": "your-x-client-id",
"external_x_secret": "your-x-client-secret"
}'
```
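
Once the provider is enabled and the credentials are saved, client-side sign-in with supabase-js typically looks like the following sketch. The project URL, anon key, and redirect URL below are placeholders you would replace with your own values:

```ts
import { createClient } from '@supabase/supabase-js'

// Placeholders — use your own project URL and anon key.
const supabase = createClient('https://your-project-ref.supabase.co', 'your-anon-key')

async function signInWithX() {
  // 'twitter' is the provider identifier used by Supabase Auth for X / Twitter logins.
  const { data, error } = await supabase.auth.signInWithOAuth({
    provider: 'twitter',
    options: {
      redirectTo: 'https://example.com/welcome', // optional post-login redirect (placeholder)
    },
  })

  if (error) {
    console.error('X / Twitter sign-in failed:', error.message)
  }
  return data
}
```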

2 changes: 1 addition & 1 deletion apps/docs/content/guides/platform/compute-and-disk.mdx
@@ -173,5 +173,5 @@ As mentioned in the Postgres [documentation](https://postgresqlco.nf/doc/en/para

### Constraints

- After **any** disk attribute change, there is a cooldown period of approximately six hours before you can make further adjustments. During this time, no changes are allowed. If you encounter throttling, you’ll need to wait until the cooldown period concludes before making additional modifications.
- You can modify disk attributes up to **four times** within a rolling 24-hour window. A new modification can be initiated as soon as the previous one completes. If you reach this limit, you will encounter throttling and must wait for the rolling 24-hour window to permit further adjustments.
- You can increase disk size but cannot decrease it.
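
Purely as an illustration of the rolling-window rule above (this is not a Supabase or AWS API, just a sketch of the arithmetic):

```ts
// Illustrative sketch of the "four modifications per rolling 24-hour window" rule.
const WINDOW_MS = 24 * 60 * 60 * 1000
const MAX_MODIFICATIONS_PER_WINDOW = 4

function canModifyDisk(pastModifications: Date[], now: Date = new Date()): boolean {
  // Only modifications started within the last 24 hours count against the quota.
  const recent = pastModifications.filter((t) => now.getTime() - t.getTime() < WINDOW_MS)
  return recent.length < MAX_MODIFICATIONS_PER_WINDOW
}

// Three resizes earlier in the day leave one modification available.
canModifyDisk(
  [
    new Date('2025-01-01T01:00:00Z'),
    new Date('2025-01-01T03:00:00Z'),
    new Date('2025-01-01T05:00:00Z'),
  ],
  new Date('2025-01-01T12:00:00Z')
) // true
```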
7 changes: 5 additions & 2 deletions apps/docs/content/guides/platform/database-size.mdx
@@ -71,6 +71,7 @@ Vacuum operations can temporarily increase resource utilization, which may adver
</Admonition>

Supabase projects have automatic vacuuming enabled, which ensures that these operations are performed regularly to keep the database healthy and performant.

It is possible to [fine-tune](https://www.percona.com/blog/2018/08/10/tuning-autovacuum-in-postgresql-and-autovacuum-internals/) the [autovacuum parameters](https://www.enterprisedb.com/blog/postgresql-vacuum-and-analyze-best-practice-tips), or [manually initiate](https://www.postgresql.org/docs/current/sql-vacuum.html) vacuum operations.
Running a manual vacuum after deleting large amounts of data from your DB could help reduce the database size reported by Postgres.

@@ -86,7 +87,9 @@ Supabase uses network-attached storage to balance performance with scalability.

Projects on the Pro Plan and higher have auto-scaling disks.

Disk size expands automatically when the database reaches 90% of the allocated disk size. The disk is expanded to be 50% larger (for example, 8 GB -> 12 GB). Auto-scaling can only take place once every 6 hours. If within those 6 hours you reach 95% of the disk space, your project will enter read-only mode.
Disk size expands automatically when the database reaches 90% of the allocated disk size. The disk is expanded to be 50% larger (for example, 8 GB -> 12 GB).

Auto-scaling is limited to four modifications within a rolling 24-hour window. While a new modification can be initiated immediately after the previous one completes, reaching the daily quota of four resizes will prevent further scaling until the rolling window allows it. If you reach 95% disk utilization and have exhausted your modification quota, your project will enter read-only mode.
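
As a rough sketch of the expansion rule described above (illustrative only; the thresholds come from this page, and the rounding is an assumption rather than Supabase's actual implementation):

```ts
// Illustrative only: expansion triggers at 90% usage and grows the disk to 150% of its size.
const EXPAND_AT = 0.9
const GROWTH_FACTOR = 1.5

function nextDiskSizeGb(allocatedGb: number, usedGb: number): number {
  return usedGb / allocatedGb >= EXPAND_AT ? Math.ceil(allocatedGb * GROWTH_FACTOR) : allocatedGb
}

nextDiskSizeGb(8, 7.3) // 12 — matches the 8 GB -> 12 GB example above
```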

<Admonition type="note">

@@ -100,7 +103,7 @@ Disk size can also be manually expanded on the [Database Settings page](/dashboa

You may want to import a lot of data into your database, which requires multiple disk expansions. For example, uploading more than 1.5x the current size of your database storage will put your database into [read-only mode](#read-only-mode). If so, it is highly recommended you increase the disk size manually on the [Database Settings page](/dashboard/project/_/database/settings).

Due to restrictions on the underlying cloud provider, disk expansions can occur only once every six hours. During the six hour cool down window, the disk cannot be resized again.
Due to restrictions on the underlying cloud provider, disk modifications are limited to four operations within a rolling 24-hour window. A new modification can be initiated as soon as the previous one completes, but once you reach this daily quota you must wait until the rolling 24-hour window permits further adjustments.

</Admonition>

@@ -51,7 +51,7 @@ Since the wraparound prevention autovacuum cannot be stopped, the best approach
2. **Increase Disk Throughput/IOPS:**
- **Action:** If disk I/O utilization is also consistently high (e.g., near 100%), consider temporarily increasing your disk's provisioned IOPS and throughput.
- **Why it helps:** Autovacuum is an I/O-intensive operation, involving a lot of reading and writing. Higher disk performance can significantly speed up the process.
- **Considerations:** Cloud providers often have limitations, such as a cooldown period (e.g., 6 hours) between disk modification operations.
- **Considerations:** Cloud providers may limit the number of disk modifications (e.g., up to four in 24 hours). You can start a new modification immediately after the previous one finishes, provided you have not exceeded the daily quota.

### **Monitoring progress and future prevention**

@@ -1,23 +1,23 @@
---
title = "How to bypass cooldown period"
title = "How to bypass modification limits"
topics = [
"platform"
]
keywords = ["cooldown", "disk resize"]
keywords = ["modification limits", "disk resize"]
---

This cooldown period isn't a Supabase limitation. It's rooted in how Amazon EBS (the underlying storage instance for our databases) manages volume modifications. After modifying a volume (e.g. increasing size, changing type, or IOPS), AWS enforces a mandatory 6-hour cooldown before allowing another modification on the same volume. This is to ensure data integrity and stability of the volume under load.
This limit is not imposed by Supabase. It is rooted in how Amazon EBS (the underlying storage for our databases) manages volume modifications. AWS allows up to four modifications per volume within a rolling 24-hour window. While you can start a new modification immediately after a previous one completes, AWS enforces a quota that prevents a fifth modification within that 24-hour period to ensure data integrity and volume stability.

From the [**AWS docs**](https://docs.aws.amazon.com/AWSEC2/latest/APIReference/API_ModifyVolume.html):

> After modifying a volume, you must wait at least six hours and ensure that the volume is in the in-use or available state before you can modify the same volume. This is sometimes referred to as a cooldown period.”
> After you initiate a volume modification, you must wait for that modification to reach the completed state before you can initiate another modification for the same volume. You can modify a volume up to four times within a rolling 24-hour period, as long as the volume is in the in-use or available state, and all previous modifications for that volume are completed. If you exceed this limit, you get an error message that indicates when you can perform your next modification.
There are a few options to work around the cooldown, depending on the state of your database:
There are a few options to work around this limit if you have already used your four modifications and need to make further adjustments:

1. **Restore to a new project**: This spins up a new instance with a new disk, bypassing the cooldown entirely. It’s a great option if you're okay with a new project and project refactoring. [**Docs: restoring to a new project**](/docs/guides/platform/backups#restore-to-a-new-project).
2. **pg_upgrade**: Our [**pg_upgrade**](/docs/guides/platform/upgrading) implementation migrates your data to a new disk, which skips the cooldown. The main requirement here is that the database must be operational - it can't run it if your DB is in a degraded or inaccessible state.
1. **Restore to a new project**: This spins up a new instance with a new disk, bypassing the 24-hour quota entirely. It’s a great option if you're okay with a new project and project refactoring. [**Docs: restoring to a new project**](/docs/guides/platform/backups#restore-to-a-new-project).
2. **pg_upgrade**: Our [**pg_upgrade**](/docs/guides/platform/upgrading) implementation migrates your data to a new disk, which resets the modification count. The main requirement here is that the database must be operational - it can't run if your DB is in a degraded or inaccessible state.
3. **Pause and Restore**: This also migrates to a new disk but is only available for projects on the Free plan. If you're not on the Free plan, you'd need to [**transfer your project to an organization on the Free plan**](/docs/guides/platform/project-transfer) first.

If the database is down or locked in a bad state (e.g corrupted or stuck during resize), the only path forward is to wait until the cooldown expires and the disk resize job completes in the queue.
If the database is down or locked in a bad state (e.g. corrupted or stuck during resize) and you have hit the modification limit, the only path forward is to wait until the rolling 24-hour window allows for another modification.

More on this in our doc here: [**https://supabase.com/docs/guides/platform/database-size#disk-size**](/docs/guides/platform/database-size#disk-size).
@@ -1,15 +1,18 @@
import { useParams } from 'common'
import { useBucketsQuery } from 'data/storage/buckets-query'
import { useBucketQuery } from 'data/storage/buckets-query'

export const useSelectedBucket = () => {
const { ref, bucketId } = useParams()

return useBucketsQuery(
{ projectRef: ref },
const query = useBucketQuery(
{
select(data) {
return data.find((b) => b.id === bucketId)
},
projectRef: ref,
bucketId,
},
{
enabled: !!bucketId,
}
)

return query
}
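
A hypothetical consumer of the reworked hook (component, markup, and field names below are illustrative and not part of this PR; the hook now returns the full query result for a single bucket instead of selecting from the list query):

```tsx
// Hypothetical usage sketch — not part of this PR.
import { useSelectedBucket } from './useSelectedBucket'

const SelectedBucketName = () => {
  const { data: bucket, isLoading, error } = useSelectedBucket()

  if (isLoading) return <span>Loading bucket…</span>
  if (error || !bucket) return <span>No bucket selected</span>

  return <span>{bucket.name}</span>
}
```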
@@ -1,28 +1,15 @@
import { noop } from 'lodash'
import { useEffect, useState } from 'react'

import { useStorageExplorerStateSnapshot } from '@/state/storage-explorer'
import ConfirmationModal from 'ui-patterns/Dialogs/ConfirmationModal'
import { StorageItem } from '../Storage.types'
import { STORAGE_ROW_TYPES } from '../Storage.constants'

interface ConfirmDeleteModalProps {
visible: boolean
selectedItemsToDelete: StorageItem[]
onSelectCancel: () => void
onSelectDelete: () => void
}

export const ConfirmDeleteModal = ({
visible = false,
selectedItemsToDelete = [],
onSelectCancel = noop,
onSelectDelete = noop,
}: ConfirmDeleteModalProps) => {
export const ConfirmDeleteModal = () => {
const [deleting, setDeleting] = useState(false)
const { selectedItemsToDelete, deleteFolder, deleteFiles, setSelectedItemsToDelete } =
useStorageExplorerStateSnapshot()

useEffect(() => {
setDeleting(false)
}, [visible])

const visible = selectedItemsToDelete.length > 0
const multipleFiles = selectedItemsToDelete.length > 1

const title = multipleFiles
@@ -37,18 +24,35 @@ export const ConfirmDeleteModal = ({
? `Are you sure you want to delete the selected ${selectedItemsToDelete[0].type.toLowerCase()}?`
: ``

const onConfirmDelete = () => {
setDeleting(true)
onSelectDelete()
const onDeleteSelectedFiles = async () => {
try {
setDeleting(true)
if (
selectedItemsToDelete.length === 1 &&
selectedItemsToDelete[0].type === STORAGE_ROW_TYPES.FOLDER
) {
await deleteFolder(selectedItemsToDelete[0])
} else {
await deleteFiles({ files: selectedItemsToDelete })
}
} catch (err) {
} finally {
setDeleting(false)
}
}

useEffect(() => {
setDeleting(false)
}, [visible])

return (
<ConfirmationModal
size="medium"
visible={visible}
title={<span className="break-words">{title}</span>}
size="medium"
onCancel={onSelectCancel}
onConfirm={onConfirmDelete}
loading={deleting}
onCancel={() => setSelectedItemsToDelete([])}
onConfirm={onDeleteSelectedFiles}
variant="destructive"
alert={{
base: { variant: 'destructive' },
@@ -58,5 +62,3 @@
/>
)
}

export default ConfirmDeleteModal
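
After this refactor the modal takes no props: it opens whenever the snapshot's `selectedItemsToDelete` is non-empty and closes by clearing that selection. A hypothetical call site (names and paths illustrative, not part of this PR) could look like:

```tsx
// Hypothetical sketch — the modal is rendered once and driven entirely by snapshot state.
import { useStorageExplorerStateSnapshot } from '@/state/storage-explorer'
import { ConfirmDeleteModal } from './ConfirmDeleteModal'
import { StorageItem } from '../Storage.types'

const DeleteSelectionButton = ({ items }: { items: StorageItem[] }) => {
  const { setSelectedItemsToDelete } = useStorageExplorerStateSnapshot()

  return (
    <>
      <button onClick={() => setSelectedItemsToDelete(items)}>Delete selected</button>
      {/* Opens automatically once selectedItemsToDelete is non-empty */}
      <ConfirmDeleteModal />
    </>
  )
}
```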
@@ -14,6 +14,7 @@ export interface FileExplorerProps {
columns: StorageColumn[]
selectedItems: StorageItemWithColumn[]
itemSearchString: string
isLoading?: boolean
onFilesUpload: (event: any, index: number) => void
onSelectAllItemsInColumn: (index: number) => void
onSelectColumnEmptySpace: (index: number) => void
@@ -24,6 +25,7 @@ export const FileExplorer = ({
columns = [],
selectedItems = [],
itemSearchString,
isLoading = false,
onFilesUpload = noop,
onSelectAllItemsInColumn = noop,
onSelectColumnEmptySpace = noop,
@@ -32,9 +34,6 @@
const fileExplorerRef = useRef<any>(null)
const snap = useStorageExplorerStateSnapshot()

// [Joshen] StorageExplorer will always have at least 1 column once data is loaded
const hasLoaded = columns.length > 0

useEffect(() => {
if (fileExplorerRef) {
const { scrollWidth, clientWidth } = fileExplorerRef.current
@@ -56,7 +55,7 @@
<ItemContextMenu id={CONTEXT_MENU_KEYS.STORAGE_ITEM} />
<FolderContextMenu id={CONTEXT_MENU_KEYS.STORAGE_FOLDER} />

{!hasLoaded ? (
{isLoading ? (
<FileExplorerColumn
column={{ id: '', name: '', items: [], status: STORAGE_ROW_STATUS.LOADING }}
/>