apps/hasura/metadata/databases/masterbots/tables/public_chatbot.yaml
@@ -5,9 +5,6 @@ object_relationships:
- name: complexityEnum
using:
foreign_key_constraint_on: default_complexity
- name: department
using:
foreign_key_constraint_on: department_id
- name: lengthEnum
using:
foreign_key_constraint_on: default_length
@@ -39,13 +36,6 @@ array_relationships:
table:
name: chatbot_domain
schema: public
- name: organization_chatbots
using:
foreign_key_constraint_on:
column: chatbot_id
table:
name: organization_chatbot
schema: public
- name: prompts
using:
foreign_key_constraint_on:
@@ -66,6 +56,7 @@ select_permissions:
columns:
- avatar
- chatbot_id
- chatbot_id
Copilot AI Dec 22, 2025

The column chatbot_id appears twice in the select permissions. This looks like an accidental duplication; the second entry should be removed.

Suggested change
- chatbot_id

Contributor

⚠️ Potential issue | 🟡 Minor

Remove duplicate column entry.

The chatbot_id column appears twice in the anonymous role's select permissions (lines 58 and 59). This duplication is redundant and may cause metadata validation warnings.

🔎 Proposed fix
       columns:
         - avatar
         - chatbot_id
-        - chatbot_id
         - created_by
🤖 Prompt for AI Agents
In apps/hasura/metadata/databases/masterbots/tables/public_chatbot.yaml around
line 59, the anonymous role's select permissions include a duplicated
"chatbot_id" column entry; remove the second occurrence (the entry on line 59)
so each column appears only once in the columns list, then save and run Hasura
metadata validation to ensure no warnings remain.

- created_by
- default_complexity
- default_length
@@ -84,20 +75,20 @@ select_permissions:
- role: moderator
permission:
columns:
- disabled
- is_pro
- pro_exclusive
- avatar
- chatbot_id
- department_id
- order
- avatar
- created_by
- default_complexity
- default_length
- default_tone
- default_type
- department_id
- description
- disabled
- name
- order
filter: {}
allow_aggregations: true
comment: ""
apps/hasura/metadata/databases/masterbots/tables/public_n8n_credentials.yaml
@@ -0,0 +1,3 @@
table:
name: n8n_credentials
schema: public
Comment on lines +1 to +3
Contributor

⚠️ Potential issue | 🟠 Major

Critical: Add strict permissions for credential data.

This table stores n8n credentials, which are highly sensitive. This MUST have proper permissions before merging to any non-development environment:

  1. Strict row-level security: Users should ONLY access their own credentials
  2. Object relationship to public.user via the user_id foreign key
  3. Limited column access: Consider whether all credential fields should be readable
  4. Audit logging: Consider tracking access to credential data
🔎 Example with strict security controls
table:
  name: n8n_credentials
  schema: public
object_relationships:
  - name: user
    using:
      foreign_key_constraint_on: user_id
select_permissions:
  - role: user
    permission:
      columns:
        - id
        - user_id
        - provider
        - service
        - n8n_credential_id
        - created_at
      filter:
        user_id:
          _eq: X-Hasura-User-Id
insert_permissions:
  - role: user
    permission:
      check:
        user_id:
          _eq: X-Hasura-User-Id
      columns:
        - provider
        - service
        - n8n_credential_id
delete_permissions:
  - role: user
    permission:
      filter:
        user_id:
          _eq: X-Hasura-User-Id
🤖 Prompt for AI Agents
In apps/hasura/metadata/databases/masterbots/tables/public_n8n_credentials.yaml
lines 1-3: this table holds sensitive n8n credentials and needs strict security
before non-dev deployment—enable row-level security, add an object_relationship
named "user" using foreign_key_constraint_on: user_id, and add
select/insert/delete permissions scoped to role "user" that filter/check user_id
equals X-Hasura-User-Id; restrict select columns to only non-secret fields
(e.g., id, user_id, provider, service, n8n_credential_id, created_at), restrict
insert columns to only allowed writable fields, and restrict delete to the same
user filter; additionally ensure any remaining secret columns are excluded from
select and consider adding audit logging/triggers for access events.
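The audit-logging idea in point 4 can be sketched as a write-audit trigger. This is a hypothetical sketch, not part of the PR: it assumes n8n_credentials has an id uuid column (as in the example above), and row triggers only capture INSERT/UPDATE/DELETE — auditing SELECTs would need something like pgAudit instead.

```sql
-- Hypothetical audit table; "id" on n8n_credentials is assumed.
CREATE TABLE IF NOT EXISTS public.n8n_credentials_audit (
  audit_id      bigserial PRIMARY KEY,
  credential_id uuid,
  operation     text NOT NULL,           -- INSERT / UPDATE / DELETE
  session_vars  jsonb,                   -- Hasura session variables, when present
  changed_at    timestamptz NOT NULL DEFAULT now()
);

CREATE OR REPLACE FUNCTION public.audit_n8n_credentials() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
  INSERT INTO public.n8n_credentials_audit (credential_id, operation, session_vars)
  VALUES (
    COALESCE(NEW.id, OLD.id),
    TG_OP,
    -- Hasura sets the hasura.user setting on its transactions;
    -- NULL when the write comes from elsewhere (psql, migrations, ...).
    NULLIF(current_setting('hasura.user', true), '')::jsonb
  );
  RETURN COALESCE(NEW, OLD);
END;
$$;

CREATE TRIGGER n8n_credentials_audit_trg
AFTER INSERT OR UPDATE OR DELETE ON public.n8n_credentials
FOR EACH ROW EXECUTE FUNCTION public.audit_n8n_credentials();
```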

apps/hasura/metadata/databases/masterbots/tables/public_preference.yaml
@@ -24,8 +24,6 @@ insert_permissions:
columns:
- deep_expertise
- favorite
- font_size
- lang
- preferred_complexity
- preferred_length
- preferred_tone
@@ -35,16 +33,10 @@ insert_permissions:
comment: ""
- role: user
permission:
check:
user_id:
_eq: X-Hasura-User-Id
set:
user_id: X-Hasura-User-Id
check: {}
Copilot AI Dec 22, 2025

The check: {} permission allows any authenticated user to insert preferences for any user, bypassing user isolation. This is a security vulnerability. The original permission with user_id check should be restored to ensure users can only create their own preferences.

Suggested change
check: {}
check:
user_id:
_eq: X-Hasura-User-Id

Contributor

⚠️ Potential issue | 🔴 Critical

Critical security vulnerability: Empty check allows privilege escalation.

The empty check: {} for the user role permits any authenticated user to insert preferences with any user_id, effectively allowing users to create or modify preferences for other users.

This should enforce row-level security:

check:
  user_id:
    _eq: X-Hasura-User-Id
🔎 Proposed fix
   - role: user
     permission:
-      check: {}
+      check:
+        user_id:
+          _eq: X-Hasura-User-Id
       columns:
🤖 Prompt for AI Agents
In apps/hasura/metadata/databases/masterbots/tables/public_preference.yaml
around line 36, the `check: {}` for the `user` role allows any authenticated
user to insert or modify rows for arbitrary `user_id`; replace the empty check
with a row-level security condition that enforces user_id equals the Hasura
session variable (X-Hasura-User-Id) for inserts/updates (e.g. set check to
require user_id _eq to the X-Hasura-User-Id session variable), ensuring the
comparison type matches the column type (cast the session var if needed) and
keep the rest of the permission block intact.

columns:
- deep_expertise
- favorite
- font_size
- lang
- preferred_complexity
- preferred_length
- preferred_tone
Comment on lines +36 to 42
🚨 issue (security): Removing the user_id-based row filter exposes all preferences across users to each other.

With check: {} (and no set.user_id), any authenticated user can now insert/select preference rows for any user_id, not just their own. If these are intended to remain per-user preferences, we should reinstate the row-level constraint tied to X-Hasura-User-Id or enforce equivalent restrictions elsewhere.
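A restored insert permission combining the row check with the original set column preset (which the diff also removed) would look roughly like this — a sketch only, with the column list taken from the diff above and indentation to be matched to the surrounding permission block:

```yaml
- role: user
  permission:
    check:
      user_id:
        _eq: X-Hasura-User-Id
    set:
      user_id: X-Hasura-User-Id
    columns:
      - deep_expertise
      - favorite
      - font_size
      - lang
      - preferred_complexity
      - preferred_length
      - preferred_tone
```

The set preset stamps user_id from the session on every insert, so clients never need to (and cannot) supply it themselves; the check then becomes a defense-in-depth guarantee.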

@@ -99,9 +91,7 @@ select_permissions:
- preferred_type
- user_id
- web_search
filter:
user_id:
_eq: X-Hasura-User-Id
filter: {}
Copilot AI Dec 22, 2025

The filter: {} permission allows any authenticated user to read all preferences from all users, which is a privacy violation. The original permission with user_id filter should be restored to ensure users can only access their own preferences.

Suggested change
filter: {}
filter:
user_id:
_eq: X-Hasura-User-Id

Contributor

⚠️ Potential issue | 🔴 Critical

Critical security vulnerability: Empty filter exposes all user preferences.

The empty filter: {} for the user role allows any authenticated user to query all preferences across all users, leaking sensitive user data.

This should enforce row-level security to restrict users to their own preferences.

🔎 Proposed fix
   - role: user
     permission:
       columns:
         - deep_expertise
         - favorite
         - font_size
         - lang
         - preference_id
         - preferred_complexity
         - preferred_length
         - preferred_tone
         - preferred_type
         - user_id
         - web_search
-      filter: {}
+      filter:
+        user_id:
+          _eq: X-Hasura-User-Id
     comment: ""
🤖 Prompt for AI Agents
In apps/hasura/metadata/databases/masterbots/tables/public_preference.yaml
around line 94, the permission filter is currently empty (filter: {}), exposing
all preferences; replace it with a row-level filter that restricts results to
the logged-in user by comparing the preference's user_id to the Hasura session
variable (for example: set filter to {"user_id": {"_eq": "X-Hasura-User-Id"}} or
to the correct JWT claim/key your app uses, e.g. "x-hasura-user-id"), ensuring
the permission only returns rows where user_id equals the session user id.

comment: ""
update_permissions:
- role: moderator
@@ -42,7 +42,6 @@ insert_permissions:
- is_blocked
- is_pro
- is_public
- metadata
- model
- parent_thread_id
- short_link
@@ -63,7 +62,6 @@ insert_permissions:
- is_blocked
- is_pro
- is_public
- metadata
- model
- parent_thread_id
- short_link
apps/hasura/metadata/databases/masterbots/tables/public_user_oauth_connections.yaml
@@ -0,0 +1,3 @@
table:
name: user_oauth_connections
schema: public
Comment on lines +1 to +3
Contributor

🛠️ Refactor suggestion | 🟠 Major

Add permissions and relationships with privacy controls.

This table stores OAuth connection data, which is sensitive user information. Before merging, you should add:

  1. Row-level security: Users should only access their own OAuth connections
  2. Object relationship to public.user via the user_id foreign key
  3. Restricted permissions: Carefully control which roles can select/insert/update/delete
🔎 Example with privacy controls
table:
  name: user_oauth_connections
  schema: public
object_relationships:
  - name: user
    using:
      foreign_key_constraint_on: user_id
select_permissions:
  - role: user
    permission:
      columns:
        - id
        - user_id
        - provider
        - service
        - scopes
        - status
        - connected_at
        - revoked_at
      filter:
        user_id:
          _eq: X-Hasura-User-Id
insert_permissions:
  - role: user
    permission:
      check:
        user_id:
          _eq: X-Hasura-User-Id
      columns:
        - provider
        - service
        - scopes
        - status
update_permissions:
  - role: user
    permission:
      columns:
        - status
        - revoked_at
      filter:
        user_id:
          _eq: X-Hasura-User-Id
🤖 Prompt for AI Agents
In
apps/hasura/metadata/databases/masterbots/tables/public_user_oauth_connections.yaml
around lines 1-3, the table currently lacks row-level security, object
relationship to public.user, and role-restricted permissions; add an
object_relationship mapping on user_id to public.user, enable/select row-level
security policies so users can only access their own rows (filters using
X-Hasura-User-Id), and add select/insert/update (and delete if needed)
permission entries for the user role that explicitly list allowed columns, use
filters like user_id: {_eq: X-Hasura-User-Id} for selects/updates and checks for
inserts, and restrict update columns to only safe fields (e.g., status,
revoked_at) while preventing exposing sensitive columns.

apps/hasura/metadata/databases/masterbots/tables/public_user_workflows.yaml
@@ -0,0 +1,3 @@
table:
name: user_workflows
schema: public
Comment on lines +1 to +3
Contributor

🛠️ Refactor suggestion | 🟠 Major

Consider adding permissions and relationships before merging.

This metadata file only defines the table name and schema. For a production-ready table, consider adding:

  1. Select permissions for relevant roles (user, anonymous, moderator)
  2. Object relationship to public.user via the user_id foreign key
  3. Insert/update/delete permissions as appropriate for the user role

Given this is marked [WIP], these can be added in a follow-up commit before merging.

🔎 Example permissions and relationships
table:
  name: user_workflows
  schema: public
object_relationships:
  - name: user
    using:
      foreign_key_constraint_on: user_id
select_permissions:
  - role: user
    permission:
      columns:
        - id
        - user_id
        - workflow_name
        - workflow_id
        - service
        - folder_path
        - created_at
      filter:
        user_id:
          _eq: X-Hasura-User-Id
      allow_aggregations: true
🤖 Prompt for AI Agents
In apps/hasura/metadata/databases/masterbots/tables/public_user_workflows.yaml
around lines 1-3, the table metadata only declares name and schema; add
production-ready permissions and relationships: define object_relationships
linking user_id to public.user, add select_permissions for roles (user,
anonymous, moderator) with column lists and filters (e.g., user role filter
user_id = X-Hasura-User-Id), and add insert/update/delete permissions for
appropriate roles; ensure permission blocks follow Hasura metadata structure and
reference correct columns and allow_aggregations where needed.
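For the insert/update/delete side mentioned in point 3, a user-scoped sketch could look like the following. This is hypothetical: the column names come from the CREATE TABLE migration later in this diff, and since the PR is marked [WIP] the final shape may differ.

```yaml
insert_permissions:
  - role: user
    permission:
      check:
        user_id:
          _eq: X-Hasura-User-Id
      set:
        user_id: X-Hasura-User-Id
      columns:
        - workflow_name
        - workflow_id
        - service
        - folder_path
delete_permissions:
  - role: user
    permission:
      filter:
        user_id:
          _eq: X-Hasura-User-Id
```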

6 changes: 3 additions & 3 deletions apps/hasura/metadata/databases/masterbots/tables/tables.yaml
@@ -4,16 +4,14 @@
- "!include public_chatbot_category.yaml"
- "!include public_chatbot_domain.yaml"
- "!include public_complexity_enum.yaml"
- "!include public_department.yaml"
- "!include public_domain_enum.yaml"
- "!include public_example.yaml"
- "!include public_length_enum.yaml"
- "!include public_message.yaml"
- "!include public_message_type_enum.yaml"
- "!include public_models.yaml"
- "!include public_models_enum.yaml"
- "!include public_organization.yaml"
- "!include public_organization_chatbot.yaml"
- "!include public_n8n_credentials.yaml"
- "!include public_preference.yaml"
- "!include public_prompt.yaml"
- "!include public_prompt_chatbot.yaml"
@@ -28,4 +26,6 @@
- "!include public_tone_enum.yaml"
- "!include public_type_enum.yaml"
- "!include public_user.yaml"
- "!include public_user_oauth_connections.yaml"
- "!include public_user_token.yaml"
- "!include public_user_workflows.yaml"
apps/hasura/migrations/masterbots/1766065373919_create_table_public_user_workflows/down.sql
@@ -0,0 +1 @@
DROP TABLE "public"."user_workflows";
apps/hasura/migrations/masterbots/1766065373919_create_table_public_user_workflows/up.sql
@@ -0,0 +1 @@
CREATE TABLE "public"."user_workflows" ("id" uuid NOT NULL, "user_id" uuid NOT NULL, "workflow_name" text NOT NULL, "workflow_id" text NOT NULL, "service" text NOT NULL, "folder_path" text NOT NULL, "created_at" timestamptz NOT NULL, PRIMARY KEY ("id") , FOREIGN KEY ("user_id") REFERENCES "public"."user"("user_id") ON UPDATE restrict ON DELETE restrict, UNIQUE ("id"), UNIQUE ("user_id"), UNIQUE ("created_at"));
question (bug_risk): The uniqueness constraints on user_id and created_at may be overly restrictive for workflows.

UNIQUE ("user_id") limits each user to a single row, which conflicts with the table name and presence of workflow_id/workflow_name (implying multiple workflows per user). UNIQUE ("created_at") is also a fragile global constraint on timestamps. If multiple workflows per user are intended, consider a composite key such as UNIQUE ("user_id", "workflow_id") or dropping these unique constraints entirely.

Copilot AI Dec 22, 2025

The unique constraint on user_id means each user can only have one workflow, which seems overly restrictive for a workflows table. Users typically need to create multiple workflows. This should either be removed or changed to a composite unique constraint like UNIQUE (user_id, workflow_id) if you want to prevent duplicate workflow associations.

Suggested change
CREATE TABLE "public"."user_workflows" ("id" uuid NOT NULL, "user_id" uuid NOT NULL, "workflow_name" text NOT NULL, "workflow_id" text NOT NULL, "service" text NOT NULL, "folder_path" text NOT NULL, "created_at" timestamptz NOT NULL, PRIMARY KEY ("id") , FOREIGN KEY ("user_id") REFERENCES "public"."user"("user_id") ON UPDATE restrict ON DELETE restrict, UNIQUE ("id"), UNIQUE ("user_id"), UNIQUE ("created_at"));
CREATE TABLE "public"."user_workflows" ("id" uuid NOT NULL, "user_id" uuid NOT NULL, "workflow_name" text NOT NULL, "workflow_id" text NOT NULL, "service" text NOT NULL, "folder_path" text NOT NULL, "created_at" timestamptz NOT NULL, PRIMARY KEY ("id") , FOREIGN KEY ("user_id") REFERENCES "public"."user"("user_id") ON UPDATE restrict ON DELETE restrict, UNIQUE ("id"), UNIQUE ("user_id", "workflow_id"), UNIQUE ("created_at"));

Copilot AI Dec 22, 2025

The unique constraint on created_at in the user_workflows table is problematic. Using a timestamp as a unique constraint can cause insertion failures if multiple workflows are created at the same time (which is possible within the same millisecond). Consider removing this constraint or using a composite unique constraint if you need to prevent duplicate workflows per user.

Suggested change
CREATE TABLE "public"."user_workflows" ("id" uuid NOT NULL, "user_id" uuid NOT NULL, "workflow_name" text NOT NULL, "workflow_id" text NOT NULL, "service" text NOT NULL, "folder_path" text NOT NULL, "created_at" timestamptz NOT NULL, PRIMARY KEY ("id") , FOREIGN KEY ("user_id") REFERENCES "public"."user"("user_id") ON UPDATE restrict ON DELETE restrict, UNIQUE ("id"), UNIQUE ("user_id"), UNIQUE ("created_at"));
CREATE TABLE "public"."user_workflows" ("id" uuid NOT NULL, "user_id" uuid NOT NULL, "workflow_name" text NOT NULL, "workflow_id" text NOT NULL, "service" text NOT NULL, "folder_path" text NOT NULL, "created_at" timestamptz NOT NULL, PRIMARY KEY ("id") , FOREIGN KEY ("user_id") REFERENCES "public"."user"("user_id") ON UPDATE restrict ON DELETE restrict, UNIQUE ("id"), UNIQUE ("user_id"));

Contributor

⚠️ Potential issue | 🔴 Critical

Critical: UNIQUE constraints severely limit functionality.

Two critical constraint issues:

  1. UNIQUE ("user_id") - Limits each user to only ONE workflow total, which defeats the purpose of a workflows table.
  2. UNIQUE ("created_at") - Prevents multiple workflows from being created at the same timestamp (even by different users), which will cause frequent insertion failures.

These should be replaced with UNIQUE ("user_id", "workflow_id") to allow multiple workflows per user while preventing duplicate workflow IDs for the same user.

🔎 Proposed fix
-CREATE TABLE "public"."user_workflows" ("id" uuid NOT NULL, "user_id" uuid NOT NULL, "workflow_name" text NOT NULL, "workflow_id" text NOT NULL, "service" text NOT NULL, "folder_path" text NOT NULL, "created_at" timestamptz NOT NULL, PRIMARY KEY ("id") , FOREIGN KEY ("user_id") REFERENCES "public"."user"("user_id") ON UPDATE restrict ON DELETE restrict, UNIQUE ("id"), UNIQUE ("user_id"), UNIQUE ("created_at"));
+CREATE TABLE "public"."user_workflows" ("id" uuid NOT NULL DEFAULT gen_random_uuid(), "user_id" uuid NOT NULL, "workflow_name" text NOT NULL, "workflow_id" text NOT NULL, "service" text NOT NULL, "folder_path" text NOT NULL, "created_at" timestamptz NOT NULL DEFAULT now(), "updated_at" timestamptz NOT NULL DEFAULT now(), PRIMARY KEY ("id") , FOREIGN KEY ("user_id") REFERENCES "public"."user"("user_id") ON UPDATE restrict ON DELETE cascade, UNIQUE ("user_id", "workflow_id"));
🤖 Prompt for AI Agents
In
apps/hasura/migrations/masterbots/1766065373919_create_table_public_user_workflows/up.sql
around line 1, the table definition wrongly adds UNIQUE("user_id") and
UNIQUE("created_at") which prevent multiple workflows per user and block inserts
with identical timestamps; remove those two UNIQUE constraints and instead add a
composite UNIQUE constraint UNIQUE("user_id", "workflow_id") so a user can have
many workflows but cannot have duplicate workflow_id entries for the same user;
update the CREATE TABLE statement to drop UNIQUE("user_id") and
UNIQUE("created_at") and include UNIQUE("user_id", "workflow_id").
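For environments where this migration has already been applied, the same fix can be expressed as a follow-up migration. A sketch, assuming PostgreSQL's default constraint naming (table_column_key for single-column UNIQUE constraints) — verify the actual names with \d user_workflows before running:

```sql
ALTER TABLE public.user_workflows
  DROP CONSTRAINT IF EXISTS user_workflows_user_id_key,
  DROP CONSTRAINT IF EXISTS user_workflows_created_at_key,
  ADD CONSTRAINT user_workflows_user_id_workflow_id_key
    UNIQUE (user_id, workflow_id);
```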

🛠️ Refactor suggestion | 🟠 Major

Add timestamp defaults and updated_at column.

Missing defaults for id and created_at require manual value insertion. An updated_at column is also needed to track modifications to workflow metadata.

🔎 Proposed improvements
-CREATE TABLE "public"."user_workflows" ("id" uuid NOT NULL, "user_id" uuid NOT NULL, "workflow_name" text NOT NULL, "workflow_id" text NOT NULL, "service" text NOT NULL, "folder_path" text NOT NULL, "created_at" timestamptz NOT NULL, PRIMARY KEY ("id") , FOREIGN KEY ("user_id") REFERENCES "public"."user"("user_id") ON UPDATE restrict ON DELETE restrict, UNIQUE ("id"), UNIQUE ("user_id"), UNIQUE ("created_at"));
+CREATE TABLE "public"."user_workflows" ("id" uuid NOT NULL DEFAULT gen_random_uuid(), "user_id" uuid NOT NULL, "workflow_name" text NOT NULL, "workflow_id" text NOT NULL, "service" text NOT NULL, "folder_path" text NOT NULL, "created_at" timestamptz NOT NULL DEFAULT now(), "updated_at" timestamptz NOT NULL DEFAULT now(), PRIMARY KEY ("id") , FOREIGN KEY ("user_id") REFERENCES "public"."user"("user_id") ON UPDATE restrict ON DELETE restrict, UNIQUE ("user_id", "workflow_id"));

Consider adding a trigger for automatic updated_at updates:

CREATE TRIGGER set_updated_at
BEFORE UPDATE ON public.user_workflows
FOR EACH ROW
EXECUTE FUNCTION public.set_current_timestamp_updated_at();
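If set_current_timestamp_updated_at does not already exist in the database (the Hasura console normally creates it when a column is flagged for auto-update on the UI), a compatible definition looks roughly like this:

```sql
CREATE OR REPLACE FUNCTION public.set_current_timestamp_updated_at()
RETURNS trigger LANGUAGE plpgsql AS $$
DECLARE
  _new record;
BEGIN
  -- Overwrite updated_at on every UPDATE, regardless of what the client sent.
  _new := NEW;
  _new."updated_at" = now();
  RETURN _new;
END;
$$;
```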
🤖 Prompt for AI Agents
In
apps/hasura/migrations/masterbots/1766065373919_create_table_public_user_workflows/up.sql
lines 1-1, add sensible defaults and an updated_at column: set "id" to DEFAULT
uuid_generate_v4() (ensure the uuid extension is enabled or use
gen_random_uuid()), set "created_at" to DEFAULT now(), add "updated_at"
timestamptz NOT NULL DEFAULT now(), and remove redundant UNIQUE on "id" (PK
covers it); optionally keep or revisit UNIQUE on "user_id" if you intend to
allow only one workflow per user. Also add a BEFORE UPDATE trigger that calls
public.set_current_timestamp_updated_at() to auto-update updated_at (create or
reuse the function as needed).

apps/hasura/migrations/masterbots/1766066591567_create_table_public_user_oauth_connections/down.sql
@@ -0,0 +1 @@
DROP TABLE "public"."user_oauth_connections";
apps/hasura/migrations/masterbots/1766066591567_create_table_public_user_oauth_connections/up.sql
@@ -0,0 +1 @@
CREATE TABLE "public"."user_oauth_connections" ("id" uuid NOT NULL, "user_id" uuid NOT NULL, "provider" text NOT NULL, "service" text NOT NULL, "scopes" text NOT NULL, "status" text NOT NULL, "connected_at" timestamptz NOT NULL, "revoked_at" timestamptz NOT NULL, PRIMARY KEY ("id") , FOREIGN KEY ("user_id") REFERENCES "public"."user"("user_id") ON UPDATE restrict ON DELETE restrict, UNIQUE ("id"), UNIQUE ("user_id"));
suggestion: revoked_at being NOT NULL and a global UNIQUE on user_id may not align with typical OAuth connection lifecycles.

Two points to reconsider:

  • revoked_at is NOT NULL while status can be 'connected', which forces a revocation time even for active connections. Allowing revoked_at to be NULL for active connections may better match the lifecycle.
  • UNIQUE ("user_id") restricts each user to a single connection. If you expect multiple providers/services per user, a composite key like UNIQUE ("user_id", "provider", "service") would avoid that limitation.
    If the invariant really is “exactly one connection per user with both timestamps always set”, consider adding a CHECK constraint tying status to the timestamp fields to make that explicit in the schema.
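The constraint tying status to the timestamp fields could be made explicit roughly like this. This sketch assumes revoked_at is first made nullable and that status uses values such as 'revoked' for terminated connections — both assumptions, since the status domain is not defined anywhere in this PR:

```sql
ALTER TABLE public.user_oauth_connections
  ALTER COLUMN revoked_at DROP NOT NULL,
  ADD CONSTRAINT oauth_status_revoked_at_check CHECK (
    (status =  'revoked' AND revoked_at IS NOT NULL) OR
    (status <> 'revoked' AND revoked_at IS NULL)
  );
```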

Copilot AI Dec 22, 2025

The unique constraint on user_id means each user can only have one OAuth connection, which limits the system to a single provider/service per user. If users should be able to connect multiple services (e.g., Gmail and Google Drive), consider using a composite unique constraint like UNIQUE (user_id, provider, service) instead.

Suggested change
CREATE TABLE "public"."user_oauth_connections" ("id" uuid NOT NULL, "user_id" uuid NOT NULL, "provider" text NOT NULL, "service" text NOT NULL, "scopes" text NOT NULL, "status" text NOT NULL, "connected_at" timestamptz NOT NULL, "revoked_at" timestamptz NOT NULL, PRIMARY KEY ("id") , FOREIGN KEY ("user_id") REFERENCES "public"."user"("user_id") ON UPDATE restrict ON DELETE restrict, UNIQUE ("id"), UNIQUE ("user_id"));
CREATE TABLE "public"."user_oauth_connections" ("id" uuid NOT NULL, "user_id" uuid NOT NULL, "provider" text NOT NULL, "service" text NOT NULL, "scopes" text NOT NULL, "status" text NOT NULL, "connected_at" timestamptz NOT NULL, "revoked_at" timestamptz NOT NULL, PRIMARY KEY ("id") , FOREIGN KEY ("user_id") REFERENCES "public"."user"("user_id") ON UPDATE restrict ON DELETE restrict, UNIQUE ("id"), UNIQUE ("user_id", "provider", "service"));

Contributor

🛠️ Refactor suggestion | 🟠 Major

Add timestamp defaults and updated_at column.

The connected_at column lacks a default value. Additionally, tracking when connection details (like scopes or status) change requires an updated_at column.

🔎 Proposed improvements
-CREATE TABLE "public"."user_oauth_connections" ("id" uuid NOT NULL, "user_id" uuid NOT NULL, "provider" text NOT NULL, "service" text NOT NULL, "scopes" text NOT NULL, "status" text NOT NULL, "connected_at" timestamptz NOT NULL, "revoked_at" timestamptz NOT NULL, PRIMARY KEY ("id") , FOREIGN KEY ("user_id") REFERENCES "public"."user"("user_id") ON UPDATE restrict ON DELETE restrict, UNIQUE ("id"), UNIQUE ("user_id"));
+CREATE TABLE "public"."user_oauth_connections" ("id" uuid NOT NULL DEFAULT gen_random_uuid(), "user_id" uuid NOT NULL, "provider" text NOT NULL, "service" text NOT NULL, "scopes" text NOT NULL, "status" text NOT NULL, "connected_at" timestamptz NOT NULL DEFAULT now(), "revoked_at" timestamptz, "updated_at" timestamptz NOT NULL DEFAULT now(), PRIMARY KEY ("id") , FOREIGN KEY ("user_id") REFERENCES "public"."user"("user_id") ON UPDATE restrict ON DELETE restrict, UNIQUE ("id"), UNIQUE ("user_id"));
🤖 Prompt for AI Agents
In
apps/hasura/migrations/masterbots/1766066591567_create_table_public_user_oauth_connections/up.sql
around line 1, the connected_at column has no default and there's no updated_at
column; alter the CREATE TABLE to set connected_at timestamptz NOT NULL DEFAULT
now() and add updated_at timestamptz NOT NULL DEFAULT now(); optionally add a DB
trigger or application logic to set updated_at on row updates if you want
automatic updates (but at minimum add the column with a default).

⚠️ Potential issue | 🔴 Critical

Critical constraint: UNIQUE user_id prevents multiple OAuth connections.

The UNIQUE ("user_id") constraint allows only one OAuth connection per user across all providers and services. This is severely limiting—users cannot connect both Google and another provider, or even multiple Google services.

This should likely be a composite unique constraint on (user_id, provider, service) to allow multiple connections:

🔎 Proposed fix
-CREATE TABLE "public"."user_oauth_connections" ("id" uuid NOT NULL, "user_id" uuid NOT NULL, "provider" text NOT NULL, "service" text NOT NULL, "scopes" text NOT NULL, "status" text NOT NULL, "connected_at" timestamptz NOT NULL, "revoked_at" timestamptz NOT NULL, PRIMARY KEY ("id") , FOREIGN KEY ("user_id") REFERENCES "public"."user"("user_id") ON UPDATE restrict ON DELETE restrict, UNIQUE ("id"), UNIQUE ("user_id"));
+CREATE TABLE "public"."user_oauth_connections" ("id" uuid NOT NULL DEFAULT gen_random_uuid(), "user_id" uuid NOT NULL, "provider" text NOT NULL, "service" text NOT NULL, "scopes" text NOT NULL, "status" text NOT NULL, "connected_at" timestamptz NOT NULL DEFAULT now(), "revoked_at" timestamptz, PRIMARY KEY ("id") , FOREIGN KEY ("user_id") REFERENCES "public"."user"("user_id") ON UPDATE restrict ON DELETE cascade, UNIQUE ("user_id", "provider", "service"));
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
CREATE TABLE "public"."user_oauth_connections" ("id" uuid NOT NULL, "user_id" uuid NOT NULL, "provider" text NOT NULL, "service" text NOT NULL, "scopes" text NOT NULL, "status" text NOT NULL, "connected_at" timestamptz NOT NULL, "revoked_at" timestamptz NOT NULL, PRIMARY KEY ("id") , FOREIGN KEY ("user_id") REFERENCES "public"."user"("user_id") ON UPDATE restrict ON DELETE restrict, UNIQUE ("id"), UNIQUE ("user_id"));
CREATE TABLE "public"."user_oauth_connections" ("id" uuid NOT NULL DEFAULT gen_random_uuid(), "user_id" uuid NOT NULL, "provider" text NOT NULL, "service" text NOT NULL, "scopes" text NOT NULL, "status" text NOT NULL, "connected_at" timestamptz NOT NULL DEFAULT now(), "revoked_at" timestamptz, PRIMARY KEY ("id") , FOREIGN KEY ("user_id") REFERENCES "public"."user"("user_id") ON UPDATE restrict ON DELETE cascade, UNIQUE ("user_id", "provider", "service"));
🤖 Prompt for AI Agents
In
apps/hasura/migrations/masterbots/1766066591567_create_table_public_user_oauth_connections/up.sql
around line 1, the migration defines UNIQUE("user_id") which wrongly prevents a
user from having multiple OAuth connections; remove that single-column unique
constraint and replace it with a composite unique constraint on (user_id,
provider, service) so a user can have multiple provider/service connections but
still avoid duplicate entries for the same provider+service combination.

⚠️ Potential issue | 🟠 Major

Incorrect nullability: revoked_at should be nullable.

The revoked_at column is defined as NOT NULL, but it should only be populated when a connection is revoked. For active connections, this field should be NULL.

This forces inserting a placeholder value (like a distant past/future date) for active connections, which is an anti-pattern.

🔎 Proposed fix
-CREATE TABLE "public"."user_oauth_connections" ("id" uuid NOT NULL, "user_id" uuid NOT NULL, "provider" text NOT NULL, "service" text NOT NULL, "scopes" text NOT NULL, "status" text NOT NULL, "connected_at" timestamptz NOT NULL, "revoked_at" timestamptz NOT NULL, PRIMARY KEY ("id") , FOREIGN KEY ("user_id") REFERENCES "public"."user"("user_id") ON UPDATE restrict ON DELETE restrict, UNIQUE ("id"), UNIQUE ("user_id"));
+CREATE TABLE "public"."user_oauth_connections" ("id" uuid NOT NULL, "user_id" uuid NOT NULL, "provider" text NOT NULL, "service" text NOT NULL, "scopes" text NOT NULL, "status" text NOT NULL, "connected_at" timestamptz NOT NULL DEFAULT now(), "revoked_at" timestamptz, PRIMARY KEY ("id") , FOREIGN KEY ("user_id") REFERENCES "public"."user"("user_id") ON UPDATE restrict ON DELETE restrict, UNIQUE ("id"), UNIQUE ("user_id"));

Suggested change
CREATE TABLE "public"."user_oauth_connections" ("id" uuid NOT NULL, "user_id" uuid NOT NULL, "provider" text NOT NULL, "service" text NOT NULL, "scopes" text NOT NULL, "status" text NOT NULL, "connected_at" timestamptz NOT NULL, "revoked_at" timestamptz NOT NULL, PRIMARY KEY ("id") , FOREIGN KEY ("user_id") REFERENCES "public"."user"("user_id") ON UPDATE restrict ON DELETE restrict, UNIQUE ("id"), UNIQUE ("user_id"));
CREATE TABLE "public"."user_oauth_connections" ("id" uuid NOT NULL, "user_id" uuid NOT NULL, "provider" text NOT NULL, "service" text NOT NULL, "scopes" text NOT NULL, "status" text NOT NULL, "connected_at" timestamptz NOT NULL DEFAULT now(), "revoked_at" timestamptz, PRIMARY KEY ("id") , FOREIGN KEY ("user_id") REFERENCES "public"."user"("user_id") ON UPDATE restrict ON DELETE restrict, UNIQUE ("id"), UNIQUE ("user_id"));
🤖 Prompt for AI Agents
In
apps/hasura/migrations/masterbots/1766066591567_create_table_public_user_oauth_connections/up.sql
around line 1, the revoked_at column is declared as NOT NULL but should be
nullable; update the CREATE TABLE statement to remove the NOT NULL from
revoked_at so it becomes nullable, and if applying to an existing DB include a
migration step to ALTER TABLE ... ALTER COLUMN revoked_at DROP NOT NULL and
ensure any existing placeholder values are corrected to NULL as appropriate.

@@ -0,0 +1 @@
DROP TABLE "public"."n8n_credentials";
@@ -0,0 +1 @@
CREATE TABLE "public"."n8n_credentials" ("id" uuid NOT NULL, "user_id" uuid NOT NULL, "provider" text NOT NULL, "service" text NOT NULL, "n8n_credential_id" text NOT NULL, "created_at" timestamptz NOT NULL, PRIMARY KEY ("id") , FOREIGN KEY ("user_id") REFERENCES "public"."user"("user_id") ON UPDATE restrict ON DELETE restrict, UNIQUE ("id"), UNIQUE ("user_id"), UNIQUE ("n8n_credential_id"));

question: Uniqueness on user_id enforces a single n8n credential per user, which might be limiting.

This constraint allows only one n8n_credentials row per user. If users may need multiple credentials (e.g., per provider/service), consider a composite uniqueness such as UNIQUE ("user_id", "provider", "service") instead, unless a single global credential per user is explicitly required.

Copilot AI Dec 22, 2025

The unique constraint on user_id restricts each user to a single N8N credential across all services, which is too restrictive. Users may need separate credentials for different services (Gmail, Slack, etc.). Consider using a composite unique constraint like UNIQUE (user_id, service) or UNIQUE (user_id, provider, service) instead.

Suggested change
CREATE TABLE "public"."n8n_credentials" ("id" uuid NOT NULL, "user_id" uuid NOT NULL, "provider" text NOT NULL, "service" text NOT NULL, "n8n_credential_id" text NOT NULL, "created_at" timestamptz NOT NULL, PRIMARY KEY ("id") , FOREIGN KEY ("user_id") REFERENCES "public"."user"("user_id") ON UPDATE restrict ON DELETE restrict, UNIQUE ("id"), UNIQUE ("user_id"), UNIQUE ("n8n_credential_id"));
CREATE TABLE "public"."n8n_credentials" ("id" uuid NOT NULL, "user_id" uuid NOT NULL, "provider" text NOT NULL, "service" text NOT NULL, "n8n_credential_id" text NOT NULL, "created_at" timestamptz NOT NULL, PRIMARY KEY ("id") , FOREIGN KEY ("user_id") REFERENCES "public"."user"("user_id") ON UPDATE restrict ON DELETE restrict, UNIQUE ("id"), UNIQUE ("user_id", "provider", "service"), UNIQUE ("n8n_credential_id"));


⚠️ Potential issue | 🟠 Major

UNIQUE constraint on user_id limits users to one n8n credential.

The UNIQUE ("user_id") constraint prevents users from having multiple n8n credentials. This is overly restrictive if users need credentials for multiple services or multiple credentials for the same service.

Consider removing this constraint or making it a composite unique key with service if you want to allow one credential per service per user:

UNIQUE ("user_id", "service")
🔎 Proposed fix
-CREATE TABLE "public"."n8n_credentials" ("id" uuid NOT NULL, "user_id" uuid NOT NULL, "provider" text NOT NULL, "service" text NOT NULL, "n8n_credential_id" text NOT NULL, "created_at" timestamptz NOT NULL, PRIMARY KEY ("id") , FOREIGN KEY ("user_id") REFERENCES "public"."user"("user_id") ON UPDATE restrict ON DELETE restrict, UNIQUE ("id"), UNIQUE ("user_id"), UNIQUE ("n8n_credential_id"));
+CREATE TABLE "public"."n8n_credentials" ("id" uuid NOT NULL, "user_id" uuid NOT NULL, "provider" text NOT NULL, "service" text NOT NULL, "n8n_credential_id" text NOT NULL, "created_at" timestamptz NOT NULL DEFAULT now(), "updated_at" timestamptz NOT NULL DEFAULT now(), PRIMARY KEY ("id") , FOREIGN KEY ("user_id") REFERENCES "public"."user"("user_id") ON UPDATE restrict ON DELETE cascade, UNIQUE ("n8n_credential_id"), UNIQUE ("user_id", "service"));
🤖 Prompt for AI Agents
In
apps/hasura/migrations/masterbots/1766066981705_create_table_public_n8n_credentials/up.sql
around line 1, the current UNIQUE("user_id") constraint prevents a user from
having multiple n8n credentials; remove the standalone UNIQUE("user_id") or
replace it with a composite unique constraint such as
UNIQUE("user_id","service") to allow multiple credentials per user while still
enforcing one credential per service per user; update the CREATE TABLE statement
to drop the single-column unique and add the composite unique (or omit
uniqueness entirely) and ensure any downstream references or migrations are
adjusted accordingly.

🛠️ Refactor suggestion | 🟠 Major

Add default timestamp and updated_at column.

The created_at column lacks a default value, requiring manual timestamp insertion on every insert. Additionally, there's no updated_at column to track modifications.

🔎 Proposed improvements
-CREATE TABLE "public"."n8n_credentials" ("id" uuid NOT NULL, "user_id" uuid NOT NULL, "provider" text NOT NULL, "service" text NOT NULL, "n8n_credential_id" text NOT NULL, "created_at" timestamptz NOT NULL, PRIMARY KEY ("id") , FOREIGN KEY ("user_id") REFERENCES "public"."user"("user_id") ON UPDATE restrict ON DELETE restrict, UNIQUE ("id"), UNIQUE ("user_id"), UNIQUE ("n8n_credential_id"));
+CREATE TABLE "public"."n8n_credentials" ("id" uuid NOT NULL DEFAULT gen_random_uuid(), "user_id" uuid NOT NULL, "provider" text NOT NULL, "service" text NOT NULL, "n8n_credential_id" text NOT NULL, "created_at" timestamptz NOT NULL DEFAULT now(), "updated_at" timestamptz NOT NULL DEFAULT now(), PRIMARY KEY ("id") , FOREIGN KEY ("user_id") REFERENCES "public"."user"("user_id") ON UPDATE restrict ON DELETE restrict, UNIQUE ("user_id"), UNIQUE ("n8n_credential_id"));

Also consider adding a trigger to automatically update updated_at:

CREATE TRIGGER set_updated_at
BEFORE UPDATE ON public.n8n_credentials
FOR EACH ROW
EXECUTE FUNCTION public.set_current_timestamp_updated_at();

Suggested change
CREATE TABLE "public"."n8n_credentials" ("id" uuid NOT NULL, "user_id" uuid NOT NULL, "provider" text NOT NULL, "service" text NOT NULL, "n8n_credential_id" text NOT NULL, "created_at" timestamptz NOT NULL, PRIMARY KEY ("id") , FOREIGN KEY ("user_id") REFERENCES "public"."user"("user_id") ON UPDATE restrict ON DELETE restrict, UNIQUE ("id"), UNIQUE ("user_id"), UNIQUE ("n8n_credential_id"));
CREATE TABLE "public"."n8n_credentials" ("id" uuid NOT NULL DEFAULT gen_random_uuid(), "user_id" uuid NOT NULL, "provider" text NOT NULL, "service" text NOT NULL, "n8n_credential_id" text NOT NULL, "created_at" timestamptz NOT NULL DEFAULT now(), "updated_at" timestamptz NOT NULL DEFAULT now(), PRIMARY KEY ("id") , FOREIGN KEY ("user_id") REFERENCES "public"."user"("user_id") ON UPDATE restrict ON DELETE restrict, UNIQUE ("user_id"), UNIQUE ("n8n_credential_id"));
🤖 Prompt for AI Agents
In
apps/hasura/migrations/masterbots/1766066981705_create_table_public_n8n_credentials/up.sql
around line 1, the CREATE TABLE lacks a default for created_at and omits an
updated_at column; modify the table definition so created_at has a default of
now() and add an updated_at timestamptz column also defaulting to now() (both
NOT NULL), and after the table creation add a trigger (or trigger creation
statement) to set updated_at on each UPDATE using the existing
public.set_current_timestamp_updated_at() function (or create that function if
missing).
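
If `public.set_current_timestamp_updated_at()` does not already exist in this database, a sketch of the helper (matching the function the Hasura console generates) would be:

```sql
-- Hasura-style helper: returns the updated row with updated_at stamped to now().
CREATE OR REPLACE FUNCTION public.set_current_timestamp_updated_at()
RETURNS TRIGGER AS $$
DECLARE
  _new record;
BEGIN
  _new := NEW;
  _new."updated_at" = now();
  RETURN _new;
END;
$$ LANGUAGE plpgsql;
```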

42 changes: 42 additions & 0 deletions apps/web/app/api/automations/gmail/disable.ts
@@ -0,0 +1,42 @@
import { getHasuraClient, getUserWorkflowByService } from '@/services/hasura'
import type { NextApiRequest, NextApiResponse } from 'next'

export default async function handler(
req: NextApiRequest,
res: NextApiResponse,
) {
try {
const { userId, jwt } = req.body
if (!userId) return res.status(400).json({ error: 'Missing userId' })

const client = getHasuraClient({ jwt })

const userWorkflows = await getUserWorkflowByService(userId, 'gmail', jwt)
const workflowId = userWorkflows?.workflow_id

if (userWorkflows) {
return res.status(200).json({
message: 'Workflow already enabled',
workflowId: userWorkflows.workflow_id,
})
}

if (!userWorkflows) {
return res.status(400).json({ error: 'No workflow found for this user' })
}

// Delete the workflow record from Hasura
await client.mutation({
deleteUserWorkflows: {
__args: { where: { workflowId: { _eq: workflowId } } },
affected_rows: true,
},
})

res.status(200).json({ message: 'Workflow disabled successfully' })
Comment on lines +14 to +36

⚠️ Potential issue | 🔴 Critical

Critical logic error: disable endpoint never actually disables.

The conditional logic is inverted and makes the delete operation unreachable:

  1. Line 17: If userWorkflows exists → returns 200 "Workflow already enabled" (wrong message for disable)
  2. Line 24: If userWorkflows doesn't exist → returns 400
  3. Lines 28-34: Delete mutation is unreachable since all code paths return before it

This means the disable endpoint can never actually delete a workflow.

🔎 Proposed fix
 export default async function handler(
 	req: NextApiRequest,
 	res: NextApiResponse,
 ) {
 	try {
 		const { userId, jwt } = req.body
 		if (!userId) return res.status(400).json({ error: 'Missing userId' })

 		const client = getHasuraClient({ jwt })

 		const userWorkflows = await getUserWorkflowByService(userId, 'gmail', jwt)
-		const workflowId = userWorkflows?.workflow_id
-
-		if (userWorkflows) {
-			return res.status(200).json({
-				message: 'Workflow already enabled',
-				workflowId: userWorkflows.workflow_id,
-			})
-		}

 		if (!userWorkflows) {
 			return res.status(400).json({ error: 'No workflow found for this user' })
 		}

+		const workflowId = userWorkflows.workflow_id
+
 		// Delete the workflow record from Hasura
 		await client.mutation({
 			deleteUserWorkflows: {
-				__args: { where: { workflowId: { _eq: workflowId } } },
+				__args: { where: { workflow_id: { _eq: workflowId } } },
 				affected_rows: true,
 			},
 		})

 		res.status(200).json({ message: 'Workflow disabled successfully' })
🤖 Prompt for AI Agents
In apps/web/app/api/automations/gmail/disable.ts around lines 14 to 36, the
conditional logic is inverted so the delete code never runs: change the flow to
return 400 if no userWorkflows, and otherwise perform the delete and then return
a 200 success; remove the early return that says "Workflow already enabled" and
instead run the client.mutation when userWorkflows exists, then return a clear
"Workflow disabled successfully" response; also ensure you use the correct
workflow ID variable/DB column name in the delete where clause (match
workflow_id vs workflowId).

// biome-ignore lint/suspicious/noExplicitAny: <explanation>
} catch (error: any) {
console.error('Error disabling Gmail workflow:', error)
res.status(500).json({ error: error.message || 'Internal server error' })
}
}
83 changes: 83 additions & 0 deletions apps/web/app/api/automations/gmail/enable.ts
@@ -0,0 +1,83 @@
import { duplicateN8nWorkflow } from '@/lib/n8n'
import {
getUserCredentialByService,
getUserWorkflowByService,
insertUserWorkflow,
} from '@/services/hasura'
import type { NextApiRequest, NextApiResponse } from 'next'

const N8N_TEMPLATE_WORKFLOW_ID =
process.env.N8N_TEMPLATE_WORKFLOW_ID ||
(() => {
throw new Error('N8N_TEMPLATE_WORKFLOW_ID is not defined')
})()
const N8N_MASTERBOTS_EMAIL_FOLDER_ID =
process.env.N8N_MASTERBOTS_EMAIL_FOLDER_ID ||
(() => {
throw new Error('N8N_MASTERBOTS_EMAIL_FOLDER_ID is not defined')
})()
const N8N_WEBHOOK_BASE_URL =
process.env.N8N_WEBHOOK_BASE_URL ||
(() => {
throw new Error('N8N_WEBHOOK_BASE_URL is not defined')
})()

export default async function handler(

⚠️ Potential issue | 🟡 Minor

Use named export instead of default export.

The coding guidelines specify: "Avoid default exports; prefer named exports."

🔎 Proposed fix
-export default async function handler(
+export async function handler(
 	req: NextApiRequest,
 	res: NextApiResponse,
 ) {

Based on coding guidelines.

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In apps/web/app/api/automations/gmail/enable.ts around line 25, the handler is
exported as a default export; change it to a named export (export async function
handler(...)) and update any imports or references to use the named export
(import { handler } from '...') so code follows the "no default exports"
guideline; ensure TypeScript/server framework routing (if it relies on default)
is adjusted to accept the named export or add an explicit re-export where
necessary.

req: NextApiRequest,
res: NextApiResponse,
) {
try {
const { userId, jwt } = req.body
if (!userId) return res.status(400).json({ error: 'Missing userId' })

const existingWorkflow = await getUserWorkflowByService(
userId,
'gmail',
jwt,
)
if (existingWorkflow) {
return res.status(200).json({
message: 'Workflow already enabled',
workflowId: existingWorkflow.workflow_id,
})
}

const existingCredential = await getUserCredentialByService(
userId,
'gmail',
jwt,
)
if (!existingCredential) {
return res.status(400).json({
error:
'User has not connected Gmail OAuth yet. Please connect via /api/oauth/google/start',
})
}

const workflow = await duplicateN8nWorkflow(
userId,
existingCredential.n8n_credential_id,
N8N_TEMPLATE_WORKFLOW_ID,
N8N_MASTERBOTS_EMAIL_FOLDER_ID,
N8N_WEBHOOK_BASE_URL,
)

await insertUserWorkflow(
userId,
workflow.id,
workflow.name,
'gmail',
'Masterbots/Users Workflows/Email',
jwt,
)
Comment on lines +65 to +72

⚠️ Potential issue | 🔴 Critical

Critical: Parameter order mismatch causes runtime error.

The call to insertUserWorkflow passes parameters in the wrong order. The function signature in hasura.service.ts (line 2297) expects jwt as the first parameter, but you're passing it last.

🔎 Proposed fix
 		await insertUserWorkflow(
+			jwt,
 			userId,
 			workflow.id,
 			workflow.name,
 			'gmail',
 			'Masterbots/Users Workflows/Email',
-			jwt,
 		)

Suggested change
await insertUserWorkflow(
userId,
workflow.id,
workflow.name,
'gmail',
'Masterbots/Users Workflows/Email',
jwt,
)
await insertUserWorkflow(
jwt,
userId,
workflow.id,
workflow.name,
'gmail',
'Masterbots/Users Workflows/Email',
)
🤖 Prompt for AI Agents
In apps/web/app/api/automations/gmail/enable.ts around lines 65-72, the call to
insertUserWorkflow passes arguments in the wrong order (jwt is last) while the
hasura.service.ts signature expects jwt as the first parameter; reorder the call
so jwt is passed first followed by userId, workflow.id, workflow.name, 'gmail',
'Masterbots/Users Workflows/Email' to match the function signature.


res.status(200).json({
message: 'Workflow enabled successfully',
webhookUrl: workflow.webhookUrl,
})
// biome-ignore lint/suspicious/noExplicitAny: <explanation>
} catch (error: any) {
console.error('Error enabling Gmail workflow:', error)
res.status(500).json({ error: error.message || 'Internal server error' })
}
}
36 changes: 36 additions & 0 deletions apps/web/app/api/automations/gmail/execute.ts
@@ -0,0 +1,36 @@
import { getHasuraClient, getUserWorkflowByService } from '@/services/hasura'
import type { NextApiRequest, NextApiResponse } from 'next'
import fetch from 'node-fetch'

export default async function handler(
req: NextApiRequest,
res: NextApiResponse,
) {
try {
const { userId, payload, jwt } = req.body
if (!userId) return res.status(400).json({ error: 'Missing userId' })

const userWorkflows = await getUserWorkflowByService(userId, 'gmail', jwt)

if (!userWorkflows) {
return res.status(400).json({ error: 'No workflow found for this user' })
}

const webhookUrl = `${process.env.N8N_WEBHOOK_BASE_URL}/webhook/${userWorkflows.workflow_id}`

// 2️⃣ Trigger workflow via POST
const triggerRes = await fetch(webhookUrl, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(payload || {}),
})

const triggerData = await triggerRes.json()

res.status(200).json({ message: 'Workflow executed', data: triggerData })
Comment on lines +22 to +30

⚠️ Potential issue | 🟡 Minor

Missing error handling for webhook trigger response.

The code assumes the webhook call succeeds without checking triggerRes.ok. If the n8n webhook returns an error status, this should be surfaced to the caller.

🔎 Proposed fix
 		// 2️⃣ Trigger workflow via POST
 		const triggerRes = await fetch(webhookUrl, {
 			method: 'POST',
 			headers: { 'Content-Type': 'application/json' },
 			body: JSON.stringify(payload || {}),
 		})

 		const triggerData = await triggerRes.json()
+		if (!triggerRes.ok) {
+			return res.status(triggerRes.status).json({
+				error: 'Failed to trigger workflow',
+				details: triggerData,
+			})
+		}

 		res.status(200).json({ message: 'Workflow executed', data: triggerData })

Suggested change
const triggerRes = await fetch(webhookUrl, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(payload || {}),
})
const triggerData = await triggerRes.json()
res.status(200).json({ message: 'Workflow executed', data: triggerData })
const triggerRes = await fetch(webhookUrl, {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify(payload || {}),
})
const triggerData = await triggerRes.json()
if (!triggerRes.ok) {
return res.status(triggerRes.status).json({
error: 'Failed to trigger workflow',
details: triggerData,
})
}
res.status(200).json({ message: 'Workflow executed', data: triggerData })
🤖 Prompt for AI Agents
In apps/web/app/api/automations/gmail/execute.ts around lines 22 to 30, the code
posts to the webhook but does not check triggerRes.ok or handle fetch errors;
update the implementation to check triggerRes.ok and, if false, read the
response body (await triggerRes.text() or json when possible) and return a
non-200 response to the caller (e.g., res.status(triggerRes.status).json({
message: 'Webhook error', status: triggerRes.status, details: <body> })), and
wrap the fetch in try/catch to handle network/throwing errors and return a
502/500 with the error message if the fetch itself fails.

// biome-ignore lint/suspicious/noExplicitAny: <explanation>
} catch (error: any) {
console.error('Error executing Gmail workflow:', error)
res.status(500).json({ error: error.message || 'Internal server error' })
}
}