
#2854 - Field For Uploading Files#2875

Open
kodinkat wants to merge 23 commits into DiscipleTools:develop from kodinkat:2854-field-for-uploading-files

Conversation

@kodinkat
Collaborator

kodinkat commented Jan 30, 2026

…eld type with customizable options including accepted file types, max file size, and display settings. Updated related components and admin settings to accommodate new field type, ensuring improved user experience for file uploads.
…mproved code readability by adding spaces around array keys and values, ensuring consistent formatting across multiple files. Updated comments for clarity in the dt-posts-endpoints file.
…on formatting by removing unnecessary spaces, enhancing code readability and consistency across the tab-custom-fields file.
@kodinkat
Collaborator Author

@corsacca @cairocoder01 ready for reviewing-ish - Will sanity check some more next week.. :)

…' to 'file_upload' across multiple files for consistency. Enhanced related components and admin settings to support the new naming convention, improving clarity and maintainability of the codebase.
… import references in index files for consistency and improved the key prefix structure in the dt-components.php file to include post ID, enhancing organization and clarity.
…ownloading files associated with posts. The endpoint verifies permissions, retrieves file metadata, generates a presigned URL, and serves the file with appropriate headers. Enhanced error handling for missing parameters and failed fetch attempts.
…ype across multiple files, including activity logging for uploads and deletions. Updated related components to improve user experience and maintainability, ensuring consistent handling of file uploads in the system.
… consistency in file upload and deletion messages across the codebase. Enhanced readability by removing unnecessary whitespace and ensuring uniformity in string interpolation.
…pace to enhance code readability and maintain consistency across the file. This change improves the overall formatting without altering functionality.
@kodinkat
Collaborator Author

kodinkat commented Feb 4, 2026

@corsacca @cairocoder01 latest updates ready for review....

…in index files to enhance clarity and maintain uniformity in the component structure. Adjusted the order of exports for better organization without affecting functionality.
…ents in index files to ensure uniformity in the component structure. This change enhances clarity and maintains organization without impacting functionality.
@cairocoder01
Collaborator

cairocoder01 commented Feb 18, 2026

Testing just to confirm the front-end and UX:

  • A bit of an edge case, but if I double click to rename a file, make a change, but then hit ESC to try to cancel the change, it seems to go through and do the renaming anyways. When hitting ESC, it should revert back to the existing file name and not make any changes.
  • Is there a way to add a better upload progress indicator that shows some sort of percent completed? I just uploaded a 7.3MB image file that took close to 2 minutes to complete. The spinner showed as expected, but I started questioning if it was stuck. I don't know if there's actually a way to get a progress indication back or not, but that would be ideal so people don't give up too soon and hit refresh or upload the file again. And would we want to disable the upload area while a file upload is in progress?
  • On the list page, if I add a file_upload field to the field list that should show in the table, nothing is displayed. To keep it simple, maybe that just shows the number of files instead of trying to display them.
  • On the list page, there is no way to filter by a file_upload field. Maybe this is OK for now and we do that later. It would be good to be able to filter for records with/without files saved, or maybe also by the number of files.
  • In the Customizations admin when editing a file_upload field, the modal window is taller than my viewport and won't scroll down, so I can't properly save changes.
  • When creating a new record and trying to upload a file in the process, the API request to upload it fails. However, it shows the file in the list as if it was successful. Then when I hit save and go to the record, it also shows the file in the list but the thumbnail is broken because the file doesn't exist.
    • Failed API requests should cause the list/thumbs to update accordingly.
    • On the new record page, the API endpoints should not be hooked up and the file content needs to be sent as part of the form submission, where PHP can pick up the file and save it through that process.

I haven't reviewed any of the backend code for its implementation. I'll leave that to @corsacca

… uploads in the new record flow to prevent premature API calls. Added support for displaying file upload counts in modular lists and ensured proper upload handling after record creation. Updated related components for improved user experience and error handling.
…sed on file uploads in the archive template, updated documentation to reflect new filter options, and enhanced JavaScript handling for file upload filters. This improves the user experience by allowing users to search for posts with or without uploaded files.
@kodinkat
Collaborator Author

@cairocoder01 @corsacca review ready....

(Screenshots attached, taken 2026-02-20 at 14:34 and 14:35.)

@cairocoder01
Collaborator

cairocoder01 commented Feb 23, 2026

  • The upload progress indicator seems to show 100% immediately and doesn't change. If there isn't a good way to show real progress, I'd rather not have the indicator at all. Or maybe it just isn't working correctly. I think the server would need to send back some sort of partial response to show progress, and that might be hard on a WordPress site. So if that's not possible, we can drop the idea.

…gs and improving error handling. This enhances code clarity and maintains functionality during file uploads.
@github-actions

Code Review: PR #2875 — File Upload Field

Thanks for this feature! Adding a dedicated file_upload field type is a meaningful addition. Here's my review:


Security Concerns

🔴 HTTP Header Injection via Content-Type (dt-posts-endpoints.php, storage_download())

The Content-Type header is set from the raw S3 response without sanitization:

$content_type = wp_remote_retrieve_header( $response, 'content-type' );
// ...
header( 'Content-Type: ' . $content_type );

If an S3 response (or a misconfigured/malicious bucket) returns a Content-Type value containing a newline (\r\n), this could allow HTTP response header injection. Consider sanitizing before output:

header( 'Content-Type: ' . sanitize_mime_type( $content_type ) );

Or at minimum strip newlines: str_replace( [ "\r", "\n" ], '', $content_type ).

🟡 Missing sanitization on accepted_file_types in the non-REST form path (tab-custom-fields.php)

In the form submission path for process_edit_field, accepted_file_types is split and trimmed but individual values are not sanitized:

$types = array_map( 'trim', explode( ',', $post_submission['accepted_file_types'] ) );
$custom_field['accepted_file_types'] = $types;

The REST endpoint path does apply sanitization. The two paths should be consistent — consider array_map( 'sanitize_text_field', ... ) here as well.


Bugs

🔴 Orphaned thumbnails when deleting files (storage_delete_single())

When a file is deleted, only the primary S3 object is removed:

$result = DT_Storage_API::delete_file( $file_key );

But each file object also stores thumbnail_key and large_thumbnail_key. These are never deleted, leaving orphaned objects in S3. The same issue exists in the bulk storage_delete path. The stored $file_info already has those keys — they should be deleted too:

if ( !empty( $file_info['thumbnail_key'] ) ) {
    DT_Storage_API::delete_file( $file_info['thumbnail_key'] );
}
if ( !empty( $file_info['large_thumbnail_key'] ) ) {
    DT_Storage_API::delete_file( $file_info['large_thumbnail_key'] );
}

Performance Concerns

🟡 Full file body loaded into PHP memory (storage_download())

$response = wp_remote_get( $presigned_url, [ 'timeout' => 300, ... ] );
$file_content = wp_remote_retrieve_body( $response );
echo $file_content;

This loads the entire file into memory before outputting it, which will be problematic for large files and could exhaust the PHP memory limit. I see there's a comment noting a WordPress bug with stream => true and long filenames. It would be worth documenting this limitation prominently (e.g. a comment with the WP bug tracker link), and potentially adding a check against a configurable max download size. For now, at a minimum, consider increasing the memory limit temporarily with wp_raise_memory_limit( 'image' ).


Design Concerns

🟡 exit inside a REST route callback (storage_download())

echo $file_content; // phpcs:ignore ...
exit;

Calling exit inside a REST callback bypasses the entire WordPress REST response stack, including any rest_post_dispatch filters. While this works for the binary download use case, it's non-standard. A cleaner alternative is to hook into rest_pre_serve_request to output the binary payload and return true, which tells WP REST the response has already been served without fully exiting the process.
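A minimal sketch of that alternative, assuming $file_content and $content_type were prepared earlier in storage_download() (variable names here are illustrative, not the PR's actual ones):

```php
// Sketch: serve the binary payload through the REST lifecycle instead of echo + exit.
// Returning true from 'rest_pre_serve_request' tells WP_REST_Server the response
// has already been sent, so no JSON serialization is attempted afterwards.
add_filter( 'rest_pre_serve_request', function ( $served ) use ( $file_content, $content_type ) {
    header( 'Content-Type: ' . sanitize_mime_type( $content_type ) );
    echo $file_content; // phpcs:ignore WordPress.Security.EscapeOutput -- binary payload
    return true; // response already served
} );
```

This keeps rest_post_dispatch and other filters functional for every other route while still short-circuiting the JSON envelope for this one.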

🟡 Inconsistent boolean handling between form and REST paths

In tab-custom-fields.php, booleans for file_upload options use presence-of-key patterns (checkbox form style). In admin-settings-endpoints.php they use (bool) $value. These two code paths should produce identical data structures in post meta to avoid subtle bugs when the same field config is read back.


Minor / Nitpicks

  • file_upload excluded from hook-based activity logging (class-hook-posts.php): This is intentional to avoid double-logging, but it means direct update_post_meta() calls outside the REST API won't be logged. A short comment explaining why file_upload is excluded would help future maintainers.

  • new-record.js fallback: The fallback from getPendingFilesForUpload() to stagedFiles is a compatibility shim for an older web component API. Once the web component is updated this should be cleaned up.

  • Documentation: The new docs/dt-posts-list-query.md entry for file_upload filters is appreciated. It would also be helpful to add a brief entry to dt-posts-field-settings.md documenting the new field type and its configuration options (accepted_file_types, max_file_size, delete_enabled, download_enabled, etc.).


What Looks Good

  • Permission separation is correct: storage_download only requires can_view, while storage_delete_single and storage_rename_single require can_update.
  • The download endpoint verifies that the requested file_key actually belongs to the post before generating a presigned URL — good IDOR prevention.
  • Using sanitize_file_name() for rename operations is appropriate.
  • The deferred upload logic in new-record.js correctly handles the case where the record doesn't exist yet.
  • esc_sql() is used correctly on $query_key in the new SQL in posts.php.
  • Filter radio-button state is correctly restored when re-opening the filter modal.

Summary: The most important items to address before merging are the header injection issue in storage_download() and the orphaned thumbnails on delete. The memory usage concern for large downloads is worth at least documenting even if a full streaming solution isn't feasible right now.

…in settings and custom fields. Additionally, sanitize Content-Type in post endpoints to prevent HTTP header injection, ensuring improved security and data integrity.
@github-actions

Code Review: #2875 - Field For Uploading Files

This is a substantial PR adding a new file_upload field type. The overall architecture is well-designed — proper permission checks, key ownership validation before S3 operations, and activity logging in all the right places. Below is detailed feedback organized by severity.


High Priority

1. No server-side file type or size validation (dt-posts/dt-posts-endpoints.php, storage_upload())

The accepted_file_types and max_file_size field settings are stored and passed to the <dt-file-upload> web component as client-side hints only. Nothing in storage_upload() checks whether the uploaded MIME type matches the field's configuration or whether the file exceeds max_file_size. A direct REST API call bypasses all restrictions entirely.

Recommendation: After receiving the upload, call wp_check_filetype_and_ext() against the allowed types list from field settings, and verify $uploaded_file['size'] against max_file_size.
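A hedged sketch of that check, assuming $uploaded_file is the $_FILES entry and $field_settings holds the field config (if accepted_file_types stores extensions rather than MIME types, a mapping step would be needed first):

```php
// Sketch: validate actual content type and size server-side before S3 upload.
$check    = wp_check_filetype_and_ext( $uploaded_file['tmp_name'], $uploaded_file['name'] );
$accepted = $field_settings['accepted_file_types'] ?? [];

if ( empty( $check['type'] ) || ( ! empty( $accepted ) && ! in_array( $check['type'], $accepted, true ) ) ) {
    return new WP_Error( 'invalid_file_type', 'File type not permitted.', [ 'status' => 400 ] );
}

$max_size = $field_settings['max_file_size'] ?? 0;
if ( $max_size > 0 && $uploaded_file['size'] > $max_size ) {
    return new WP_Error( 'file_too_large', 'File exceeds the configured size limit.', [ 'status' => 400 ] );
}
```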

2. Entire file buffered in PHP memory for downloads (storage_download())

$file_content = wp_remote_retrieve_body( $response );
header( 'Content-Length: ' . strlen( $file_content ) );
echo $file_content;

This loads the complete file into PHP memory before outputting it. For large files this will hit PHP memory limits. The comment in the code acknowledges the streaming approach was abandoned due to a bug, but no alternative is in place. Consider either: (a) fixing the streaming approach, (b) returning a redirect to the presigned S3 URL (simpler, no memory issue), or (c) at minimum documenting a recommended max_file_size cap.

Also: strlen() can misbehave for binary data when mbstring.func_overload is enabled. Prefer mb_strlen( $file_content, '8bit' ) for binary Content-Length.
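Option (b) above can be sketched in a couple of lines; note that wp_safe_redirect() would reject the S3 host unless it is added to allowed_redirect_hosts, so the plain redirect is shown here:

```php
// Sketch: hand the client the short-lived presigned URL instead of proxying.
// No file bytes ever pass through PHP memory.
wp_redirect( $presigned_url, 302 );
exit;
```

The trade-off is that the S3 URL (and bucket hostname) becomes visible to the client for the lifetime of the presigned URL.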


Medium Priority

3. Orphaned thumbnails when deleting a file (storage_delete_single())

When a file object is deleted, only the main key is passed to DT_Storage_API::delete_file(). The thumbnail_key and large_thumbnail_key stored in the same file object are never deleted from S3, leaving orphaned objects. The same applies to the bulk storage_delete path.

// Missing after DT_Storage_API::delete_file( $file_key ):
if ( ! empty( $file_object['thumbnail_key'] ) ) {
    DT_Storage_API::delete_file( $file_object['thumbnail_key'] );
}
if ( ! empty( $file_object['large_thumbnail_key'] ) ) {
    DT_Storage_API::delete_file( $file_object['large_thumbnail_key'] );
}

4. Delete endpoint returns success when S3 deletion fails (storage_delete_single())

If DT_Storage_API::delete_file() returns file_deleted = false, the file object stays in $updated_files[] and the meta is updated with the file still present (good — no data corruption). However, the endpoint returns 'deleted' => true, 'deleted_key' => $file_key_to_delete, claiming success. The caller has no way to detect the failure.

5. Potential XSS in admin JS template literal (dt-core/admin/js/dt-settings.js)

file_type_icon is interpolated into a template literal without escaping:

file_type_icon_html = `<img src="${file_type_icon}" class="field-icon" ...>`;

If this value contains " onerror="alert(1) it is an XSS vector in the admin UI. Use createElement or escape the value before interpolation.

6. Content-Disposition filename header injection (storage_download())

header( 'Content-Disposition: attachment; filename="' . esc_attr( $file_name ) . '"' );

esc_attr() is designed for HTML attribute contexts — it does not strip CRLF characters (header injection vector). For HTTP headers, sanitize_file_name() is more appropriate and is already used in storage_rename_single.


Low Priority / Code Quality

7. is_multi_file fragile string comparison

$params['is_multi_file'] === 'true' requires the exact string 'true'. Sending 1 or a boolean silently falls back to single-file mode. Consider filter_var( $params['is_multi_file'] ?? false, FILTER_VALIDATE_BOOLEAN ).

8. 'file'/'files' not translatable (dt-assets/js/modular-list.js)

values = [`${fileCount} ${fileCount === 1 ? 'file' : 'files'}`];

These hardcoded English strings should go through the JS i18n pattern used elsewhere in the codebase.

9. New-record error leaves user stranded (dt-assets/js/new-record.js)

If file uploads fail after the record is created, the page stays in an error state with no redirect to the partially-created record. Consider redirecting to the record and surfacing the error there instead.

10. display_layout not validated against allowed values

sanitize_text_field() is applied but there is no in_array() check against ['grid', 'list'] in either admin-settings-endpoints.php or tab-custom-fields.php.

11. elseif vs else if

WordPress coding standards require elseif (one word). A few instances of } else if ( appear in admin-settings-endpoints.php.

12. Missing absint() on $post_id in storage_delete_single()

$post_id = $params['id'] is used in meta functions without absint(). The route regex ensures it is numeric, but absint() is the expected PHPCS pattern for post IDs.


What's Working Well

  • Ownership validation: storage_delete_single and storage_download verify the file_key exists in the post's own meta before operating — prevents cross-post S3 access.
  • Permission model: Upload/delete/rename require can_update(), download only requires can_view() — correct differentiation.
  • Activity logging: All three new operations log to dt_activity_insert with _x() translated strings and the disciple_tools text domain.
  • Presigned URL regeneration: render_file_upload() regenerates fresh URLs on every render rather than storing them — correct for S3 expiration.
  • sanitize_mime_type() applied to content type in download response.
  • sanitize_file_name() applied in rename.
  • Backward compatibility preserved for single-file mode.

Summary

Severity Count Key Issues
High 2 No server-side file type/size validation; full file buffered in memory for downloads
Medium 4 Orphaned thumbnails on delete; false success on S3 failure; admin JS XSS; header injection
Low 6 Code quality, i18n, and PHPCS concerns

The most important items before merge: server-side file type validation and the download memory issue — both can cause real production problems. The thumbnail deletion bug will silently accumulate orphaned S3 objects and is worth fixing now.

Review generated with Claude Code

@kodinkat
Collaborator Author

@cairocoder01 review ready...

@cairocoder01
Collaborator

This looks good to me. Ready for @corsacca

When all looks good, I can merge the associated PR on the component library first, publish a release, and then update the version here in this PR before merging.

@corsacca
Member

corsacca commented Mar 2, 2026

Thanks guys!

@kodinkat
By default we are allowing images and pdfs.
It is unclear how to enable:

  • audio files
  • videos
  • documents (.docx, .txt, .md, spreadsheets, etc.)

I think we can enable those by default. No?
With a link to somewhere listing out other file types.

File type icons

It looks like we are getting the icons working by default.

I don't think we need the extra input for making it different.

@corsacca
Member

corsacca commented Mar 2, 2026

Otherwise the upload field is looking good! I don't think the above changes require changing the component, so we might be good there. @cairocoder01

@cairocoder01
Collaborator

@corsacca Sounds good. I merged the component request and created release v0.8.10 with this new component in it. Available from NPM now.

kodinkat added 3 commits March 4, 2026 13:35
… include various document formats and audio/video types. Refactor related JavaScript and PHP code to improve handling of accepted file types, including the removal of the file type icon feature for a cleaner interface. Update placeholder text and help descriptions for better user guidance on file uploads.
…dability. Adjusted formatting of accepted file types array to enhance code clarity while maintaining existing functionality.
@github-actions

github-actions bot commented Mar 4, 2026

Code Review: #2854 — Field For Uploading Files

This is a significant feature PR. The overall approach is solid and the UI looks well thought-out. Below are the issues I found, ordered by severity.


Critical

1. No server-side file type validation (dt-posts/dt-posts-endpoints.php, storage_upload)

The accepted_file_types field setting is only consumed by the browser's <input accept="..."> attribute. Any authenticated user with post-update permission can bypass it entirely by calling the REST endpoint directly and uploading arbitrary file types (.php, .exe, .svg with scripts, etc.). Files go to S3 so .php won't execute there, but the admin's configuration is not honored and the attack surface is still meaningful.

Fix required: Add server-side validation against the field's accepted_file_types setting and max_file_size before calling DT_Storage_API::upload_file(). Use wp_check_filetype_and_ext() (which inspects actual file contents, not just extension) for type verification.


High

2. $file_count overwritten inside the upload loop (storage_upload)

$file_count = is_array( $files['name'] ) ? count( $files['name'] ) : 1;  // outer loop bound

for ( $i = 0; $i < $file_count; $i++ ) {
    // ...
    $file_count = count( $uploaded_file_names );  // OVERWRITES outer $file_count mid-loop
}

This will cause the loop to terminate early when multiple files are uploaded and at least one succeeds, resulting in only the first file being processed. Use a different variable name for the inner count.
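A corrected sketch, keeping the loop bound in its own variable (names here are illustrative):

```php
// Sketch: the loop bound and the running success count are separate variables,
// so appending to $uploaded_file_names can no longer terminate the loop early.
$total_files = is_array( $files['name'] ) ? count( $files['name'] ) : 1;

for ( $i = 0; $i < $total_files; $i++ ) {
    // ... upload logic ...
    $uploaded_count = count( $uploaded_file_names ); // does not clobber $total_files
}
```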

3. $_FILES['type'] trusted as MIME type (dt-core/dt-storage.php)

$type = $upload['type'] ?? '';  // Client-supplied, not verified
$client->putObject([
    'ContentType' => $type   // Attacker-controlled MIME type sent to S3
]);

$_FILES['type'] is provided by the browser and can be set to any string. The actual MIME type should be detected server-side using wp_check_filetype_and_ext() or finfo.


Medium

4. delete_enabled / rename_enabled not enforced at the API layer

The REST permission callbacks for storage_delete_single and storage_rename_single only check DT_Posts::can_update(). They do not verify whether the field's delete_enabled or rename_enabled settings are true. These flags are enforced only in the UI. Any user with post-update permission can call the API directly to delete or rename files regardless of admin configuration.
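A hedged sketch of enforcing those flags in the endpoint itself, assuming DT_Posts::get_post_field_settings() exposes the field config as elsewhere in this PR:

```php
// Sketch: honor the admin-configured flag at the API layer, not just in the UI.
$field_settings = DT_Posts::get_post_field_settings( $post_type );
if ( empty( $field_settings[ $meta_key ]['delete_enabled'] ) ) {
    return new WP_Error( 'delete_disabled', 'File deletion is disabled for this field.', [ 'status' => 403 ] );
}
```

The same pattern would apply to rename_enabled in storage_rename_single.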

5. meta_key not sanitized in storage_upload and storage_delete_single

storage_rename_single and storage_download correctly sanitize meta_key:

$meta_key = sanitize_text_field( wp_unslash( $params['meta_key'] ) );

But storage_upload and storage_delete_single do not. This should be consistent.

6. esc_attr() used for HTTP Content-Disposition header value (storage_download)

header( 'Content-Disposition: attachment; filename="' . esc_attr( $file_name ) . '"' );

esc_attr() escapes HTML characters but does not strip \r\n, which are the characters needed for HTTP header injection. sanitize_text_field() applied earlier via dt_recursive_sanitize_array strips newlines, so the risk is mitigated in practice — but the escaping at the output point should match the context. Strip \r, \n, and " characters directly before using in the header.

7. storage_download buffers the full file in PHP memory

$file_content = wp_remote_retrieve_body( wp_remote_get( $presigned_url, ['timeout' => 300] ) );

This loads the entire file into PHP memory before sending it to the client. For large files (video, etc.), this will exhaust memory_limit. Consider either:

  • Redirecting the client to the presigned URL directly (simplest and most efficient)
  • Streaming using fopen() + fpassthru() with chunked output

8. storage_delete_single returns deleted => true even when S3 deletion fails

When DT_Storage_API::delete_file() fails, the file is correctly kept in the metadata array — but the function still returns ['deleted' => true, ...]. The response should reflect whether the actual S3 deletion succeeded.
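A sketch of surfacing the real result, assuming delete_file() returns an array with a file_deleted flag as described above (the response shape is illustrative):

```php
// Sketch: report the actual S3 outcome instead of a hardcoded success.
$result  = DT_Storage_API::delete_file( $file_key_to_delete );
$deleted = ! empty( $result['file_deleted'] );

return [
    'deleted'     => $deleted,
    'deleted_key' => $deleted ? $file_key_to_delete : null,
];
```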


Low / Standards

9. N+1 presigned URL generation per page render (dt-core/utilities/dt-components.php, render_file_upload)

Each file in the list generates 2-3 presigned URL signing operations (main + thumbnails). For a post with many attachments, this can noticeably slow page renders. Consider caching with a short TTL.

10. $wpdb->prepare() not used in dt-posts/posts.php fields_to_sql for file_upload query

AND pm.meta_key = '" . esc_sql( $query_key ) . "'

WordPress Coding Standards require $wpdb->prepare() for queries with variable data. esc_sql() reduces risk but will be flagged by PHPCS.

11. is_multi_file === 'true' fragile string comparison (storage_upload)

If the API is called with a JSON body where is_multi_file is a boolean true (not the string "true"), this comparison silently fails. Use filter_var( $params['is_multi_file'], FILTER_VALIDATE_BOOLEAN ) instead.

12. $icon output without escaping in dt-components.php

The PR changes esc_html( $icon ) to just $icon. If intentional (because $icon is already a constructed HTML attribute string), add a comment explaining the removal, or use wp_kses() with a known attribute allowlist.

13. else if vs elseif

A few places in the PR use } else if ( — WordPress Coding Standards require } elseif ( (no space).


Summary

The most critical item before merging is #1 (server-side file type validation) — the accepted_file_types admin setting is currently a browser hint only. Items #2 (loop variable overwrite) and #3 (client-controlled MIME type) are also bugs/security issues worth addressing before merge. The remaining items are lower-risk but should be cleaned up for code quality.

Great work overall on a complex feature — the field registration, REST endpoint structure, and UI integration all look well thought out.

@github-actions

github-actions bot commented Mar 4, 2026

Code Review: #2875 - Field For Uploading Files

Great feature addition! The new file_upload field type is well-structured overall, with good audit trails, consistent escaping in templates, and smart deferral of uploads until after post creation. Here are some issues to address before merging:


🔴 High Priority

1. No server-side MIME type validation (dt-posts/dt-posts-endpoints.php, storage_upload())

The accepted_file_types setting is enforced only by the client-side web component. The server stores the client-supplied $_FILES[*]['type'] directly into post meta without validation:

$file_object = [
    'type' => $uploaded_file['type'],   // client-controlled!
];

An attacker can spoof this to any MIME type. Recommend using finfo_file() to validate actual file content against a server-side allowlist before calling DT_Storage_API::upload_file().

2. Entire file buffered in memory during download (storage_download())

$file_content = wp_remote_retrieve_body( $response );
echo $file_content;

wp_remote_get() loads the full response body into a PHP variable. For large videos or documents (which are in the accepted types), this will exhaust PHP memory limits. Consider streaming with 'stream' => true + readfile(), or redirecting the client directly to the presigned S3 URL.


🟠 Medium Priority

3. No server-side file size limit enforcement (storage_upload())

The max_file_size field setting is only enforced by the web component attribute. The endpoint never checks $uploaded_file['size'] against the configured limit, so a raw curl request can bypass it. Fetch the field settings in storage_upload() and reject oversized files early.

4. storage_s3_url_duration accepted from client without bounds check

$storage_s3_url_duration = ... $params['storage_s3_url_duration'] ...;
new \DateTimeImmutable( $storage_s3_url_duration ); // only checks parseability

A client can request +100 years, generating effectively permanent presigned URLs. Recommend either removing this parameter (always use the default) or restricting it to an allowlist like ['+1 hour', '+24 hours', '+7 days'].
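The allowlist approach can be sketched as follows (the allowed values and fallback are illustrative, not prescribed):

```php
// Sketch: only accept known duration strings; fall back to a safe default.
$allowed_durations = [ '+1 hour', '+24 hours', '+7 days' ]; // hypothetical set
$duration = $params['storage_s3_url_duration'] ?? '+1 hour';

if ( ! in_array( $duration, $allowed_durations, true ) ) {
    $duration = '+1 hour';
}
```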

5. HTTP header injection risk in Content-Disposition (storage_download())

header( 'Content-Disposition: attachment; filename="' . esc_attr( $file_name ) . '"' );

esc_attr() is for HTML attribute contexts, not HTTP headers. If $file_name contains \r\n characters (possible since $_FILES[*]['name'] is stored without header-safe sanitization), this enables response splitting. Recommend:

$safe_name = preg_replace( '/[\r\n]/', '', $file_name );
header( 'Content-Disposition: attachment; filename="' . rawurlencode( $safe_name ) . '"' );

Also apply sanitize_file_name() when storing the uploaded filename to meta.

6. Unconditional delete_post_meta() in storage_delete() array path (data loss risk)

delete_post_meta( $post_id, $meta_key ); // runs even if S3 deletions all failed

If S3 deletions fail (network error, etc.), the post meta is still wiped, creating orphaned S3 objects and losing all file references. Only delete/update meta after confirming the S3 operations succeeded.
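A sketch of the success-gated variant, assuming each stored file object carries its S3 key and delete_file() reports a file_deleted flag:

```php
// Sketch: only clear the post meta once every S3 deletion reported success,
// so a partial failure never orphans objects or loses file references.
$all_deleted = true;
foreach ( $files as $file ) {
    $result = DT_Storage_API::delete_file( $file['key'] );
    if ( empty( $result['file_deleted'] ) ) {
        $all_deleted = false;
    }
}
if ( $all_deleted ) {
    delete_post_meta( $post_id, $meta_key );
}
```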


🟡 Low Priority

7. Thumbnail keys orphaned on single-file delete (storage_delete_single())

Each file object stores thumbnail_key and large_thumbnail_key, but storage_delete_single() only passes the primary $file_key to DT_Storage_API::delete_file(). The thumbnail objects in S3 are never cleaned up.

8. Presigned URLs generated for all files on every page load (render_file_upload())

// (always regenerate to refresh expired presigned URLs)
$file['url'] = DT_Storage_API::get_file_url( $file_key );

A record with 20 files makes 20+ synchronous S3 API calls on every page render (more with thumbnails). This will degrade page load times noticeably. Consider generating these lazily via JS on demand, or caching with a TTL.
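The TTL-caching option can be sketched with the WordPress Transients API; the cache key naming is hypothetical, and the TTL must stay comfortably below the presigned URL's expiry:

```php
// Sketch: reuse a recently generated presigned URL across renders.
$cache_key = 'dt_file_url_' . md5( $file_key );
$url = get_transient( $cache_key );

if ( false === $url ) {
    $url = DT_Storage_API::get_file_url( $file_key );
    set_transient( $cache_key, $url, 10 * MINUTE_IN_SECONDS );
}
$file['url'] = $url;
```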

9. strlen() for binary Content-Length header

header( 'Content-Length: ' . strlen( $file_content ) );

Use mb_strlen( $file_content, '8bit' ) to ensure correct byte count for binary data if mbstring.func_overload is active.

10. Inconsistent sanitization in storage_delete_single()

$post_type = $params['post_type'];   // not sanitized
$post_id   = $params['id'];           // not sanitized

Compare with storage_download() which applies sanitize_text_field(). The REST routing regexes (\d+, \w+) mitigate this, but apply consistent sanitization for clarity.

11. display_layout not validated against allowlist

Only sanitize_text_field() is applied; any string is accepted. An explicit allowlist check ('grid', 'list') would be more correct.


✅ Positive Observations

  • storage_download() and storage_delete_single() correctly verify $file_key exists in post meta before acting — good defense against arbitrary S3 access/deletion.
  • sanitize_mime_type() is applied to the Content-Type header in the download endpoint.
  • All PHP template output consistently uses esc_attr(), esc_html(), and esc_url().
  • dt_activity_insert() calls provide good audit trails for upload, delete, and rename operations.
  • Excluding file_upload from automatic activity logging via $ignore_field_types correctly avoids double-logging.
  • New-record flow defers file uploads until after the post is created — prevents orphaned files.

Review generated with Claude Code

…adjusting the placement of PHP closing tags. This change enhances the overall structure of the file without altering existing functionality.
@github-actions

github-actions bot commented Mar 4, 2026

PR Review: #2854 – Field For Uploading Files

This is a substantial and well-structured PR that implements multi-file upload support as a new file_upload field type. The overall design is solid — proper permission callbacks, activity logging, and server-side download proxying are all good patterns. Below are observations across security, code quality, and a few minor items.


Security

Good patterns:

  • All three new REST endpoints (storage_delete_single, storage_rename_single, storage_download) use DT_Posts::can_update() / DT_Posts::can_view() permission callbacks — correct.
  • storage_rename_single() uses sanitize_file_name() on $new_name before storing — good.
  • storage_download() verifies the requested file_key actually exists in the post's meta before generating a presigned URL — prevents insecure direct object reference.
  • Server-side presigned URL generation (no CORS exposure, no direct bucket access from client).

Concerns:

  1. Server-side file type validation is unclear.
    The storage_upload() function passes files directly to DT_Storage_API::upload_file() without validating the uploaded MIME type against the field's accepted_file_types setting. If DT_Storage_API is provided by a plugin that doesn't enforce this whitelist, arbitrary file types (including PHP scripts, SVGs with XSS, etc.) could be uploaded. Recommend adding an explicit check in storage_upload():

    $field_settings = DT_Posts::get_post_field_settings( $post_type );
    $accepted_types = $field_settings[ $meta_key ]['accepted_file_types'] ?? [];
    if ( ! empty( $accepted_types ) && ! in_array( $uploaded_file['type'], $accepted_types, true ) ) {
        // Assumes accepted_file_types is stored as an array of MIME type strings.
        return new WP_Error( 'invalid_file_type', 'File type not permitted for this field.', [ 'status' => 400 ] );
    }
  2. $_FILES data is passed through dt_recursive_sanitize_array() but file names are not run through sanitize_file_name().
    The name element from $_FILES is stored in post meta without this additional sanitization. The rename endpoint does use sanitize_file_name() — the upload should too for consistency.
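As a minimal sketch of that consistency fix (the `$uploaded_file` variable name follows the surrounding review; where exactly the metadata is assembled is up to the implementation):

```php
// Sketch only: normalize the browser-supplied name the same way the
// rename endpoint does, before it is stored in post meta.
$uploaded_file['name'] = sanitize_file_name( $uploaded_file['name'] ?? '' );
```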

  3. DateTimeImmutable used for duration validation via exception catching.
    In storage_upload():

    try {
        new \DateTimeImmutable( $storage_s3_url_duration );
        $url_params = [ 'duration' => $storage_s3_url_duration ];
    } catch ( \Exception $e ) {
        $url_params = [];
    }

    This works but silently swallows invalid inputs. A quick regex guard like /^[+-]\d+\s+(second|minute|hour|day|week|month|year)s?$/i would make intent explicit and prevent unusual date strings from being passed through.
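A hedged sketch of that guard, using the suggested pattern and the variable names from the snippet above:

```php
// Accept only simple relative-interval strings such as "+1 day" or "2 hours";
// anything else falls back to the storage layer's default duration.
if ( is_string( $storage_s3_url_duration )
    && preg_match( '/^[+-]?\d+\s+(second|minute|hour|day|week|month|year)s?$/i', trim( $storage_s3_url_duration ) ) ) {
    $url_params = [ 'duration' => $storage_s3_url_duration ];
} else {
    $url_params = [];
}
```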


Code Quality

exit in a REST callback (storage_download):
Using exit; at the end of storage_download() bypasses WordPress's normal REST response lifecycle (filters, rest_post_dispatch, etc.) and makes the endpoint harder to test. If the goal is to stream headers + redirect directly, that's understandable, but it's worth a comment explaining why and whether it's intentional given the presigned URL approach (where the client could just be redirected to the URL rather than the server streaming it).

Max file size: 0 and negatives are accepted.
In tab-custom-fields.php:

$custom_field['max_file_size'] = (int) $post_submission['visibility']['max_file_size'];

A value of 0 or negative will be stored and presumably interpreted as "no limit" or cause unexpected behavior. Adding max(0, ...) or a validation check would prevent misconfiguration.

Default accepted_file_types defined in at least three places:

  • dt-core/admin/menu/tabs/tab-custom-fields.php
  • dt-core/utilities/dt-components.php (render side)
  • dt-core/admin/js/dt-settings.js (admin JS)

If the default list ever needs updating, all three need to change together. Consider defining a canonical PHP constant or filter and having JS fetch it from the localized settings object.

file_upload added to $ignore_field_types in class-hook-posts.php:
This is intentional to avoid duplicate activity log entries (since upload/delete/rename operations log directly via dt_activity_insert()). Worth a brief inline comment explaining this is intentional, since it's a non-obvious omission:

'file_upload', // Logged directly per-operation via dt_activity_insert()

WordPress coding standards note:
The following else if:

} else if ( isset( $custom_field['max_file_size'] ) ) {

Should be } elseif ( per WPCS.


Documentation

docs/dt-posts-list-query.md — good to see this updated to document the new file_upload filter options.

The field settings documentation (dt-posts-field-settings.md) doesn't appear to be updated. Adding an entry for file_upload with the available sub-settings (accepted_file_types, max_file_size, delete_enabled, download_enabled, rename_enabled, auto_upload, display_layout) would help plugin developers.


Summary

| Area | Status |
| --- | --- |
| Permission checks | ✅ Good |
| Input sanitization (rename) | ✅ Good |
| IDOR protection (download) | ✅ Good |
| Server-side file type validation | ⚠️ Unclear — depends on DT_Storage_API |
| File name sanitization on upload | ⚠️ Missing sanitize_file_name() |
| exit in REST callback | ⚠️ Intentional but should be documented |
| Max file size validation | ⚠️ Accepts 0/negative |
| Default type duplication | ℹ️ Minor maintenance concern |
| PHPCS (else if) | ℹ️ Minor style issue |

The main ask before merge would be clarifying/enforcing file type validation server-side and applying sanitize_file_name() to uploaded file names. Everything else is minor. Nice work on the overall feature design!

…s-endpoints.php. Improved security by sanitizing post type, ID, and meta key values. Added normalization for uploaded file metadata to prevent potential issues with file names and types. Updated file download headers to ensure safe handling of file names.
@github-actions

github-actions bot commented Mar 4, 2026

Code Review: PR #2875 — Field For Uploading Files

Great feature addition! The file upload field is a significant and useful capability. I have done a thorough review of the diff and found several issues ranging from critical security concerns to minor improvements. Please address the critical and high-severity items before merging.


Critical

1. MIME type sanitization fallback restores raw input

File: dt-posts/dt-posts-endpoints.php

sanitize_mime_type() returns empty string when the input is invalid. The code falls back to the raw browser-supplied value, completely negating the sanitization. An attacker can submit a MIME type that fails sanitization, and the raw value gets stored in post meta and used as the Content-Type header in storage_download. Use 'application/octet-stream' as the fallback instead of the raw input.
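A minimal sketch of the suggested fallback (`$uploaded_file` follows the naming used elsewhere in this review):

```php
// Sketch: never fall back to the raw browser-supplied value when
// sanitization fails — use a safe generic default instead.
$mime_type = sanitize_mime_type( $uploaded_file['type'] ?? '' );
if ( '' === $mime_type ) {
    $mime_type = 'application/octet-stream';
}
```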

2. storage_rename_single does not sanitize post_type or post_id

File: dt-posts/dt-posts-endpoints.php

The rename endpoint reads $post_type = $params['post_type'] and $post_id = $params['id'] without sanitize_text_field() or absint() respectively. Compare to storage_delete_single and storage_download which correctly apply those sanitizers. Both values are then passed to get_post_meta(), get_the_title(), and dt_activity_insert unsanitized. Apply the same pattern used in the other endpoints.
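The pattern from the other endpoints, sketched for the rename handler:

```php
// Sketch: mirror the sanitization already applied in storage_delete_single.
$post_type = sanitize_text_field( wp_unslash( $params['post_type'] ?? '' ) );
$post_id   = absint( $params['id'] ?? 0 );
```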

3. Content-Disposition header injection via unescaped filename

File: dt-posts/dt-posts-endpoints.php

The code strips CR/LF from the filename but does not escape double-quotes or backslashes before embedding it in the Content-Disposition header as a quoted-string. A filename containing a literal double-quote (e.g. foo".exe) breaks out of the quoted-string per RFC 6266, enabling header injection. Also escape backslash and double-quote characters before building the header value.
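A hedged sketch of the header construction (`$file_name` is illustrative):

```php
// Sketch: strip CR/LF, then escape backslashes and double-quotes so the
// value cannot break out of the quoted-string (RFC 6266).
$safe_name = str_replace( [ "\r", "\n" ], '', $file_name );
$safe_name = addcslashes( $safe_name, '\\"' );
header( 'Content-Disposition: attachment; filename="' . $safe_name . '"' );
```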

4. storage_download uses sanitize_text_field instead of absint for post ID

File: dt-posts/dt-posts-endpoints.php

storage_delete_single correctly uses absint() for the post ID. storage_download uses sanitize_text_field(), which allows values like -1 or 1.5 to reach get_post_meta(). Use absint() consistently across all endpoints.


High

5. display_layout not validated against allowlist

File: dt-core/admin/admin-settings-endpoints.php

Any arbitrary string is accepted and stored for display_layout, then emitted as a web component attribute on every page rendering the field. Validate against the expected values (e.g. ['grid', 'list']) using in_array() with strict comparison before storing.
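A minimal sketch of that allowlist check (the `$post_submission` key name is an assumption about the settings payload):

```php
// Sketch: reject anything outside the known layouts before storing.
$display_layout = sanitize_text_field( wp_unslash( $post_submission['display_layout'] ?? '' ) );
if ( ! in_array( $display_layout, [ 'grid', 'list' ], true ) ) {
    $display_layout = 'grid'; // fall back to a sane default
}
```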

6. accepted_file_types stored without MIME type validation

File: dt-core/admin/admin-settings-endpoints.php

The comma-separated accepted file types list is only processed with sanitize_text_field() — no validation that entries are valid MIME types or extension patterns. An admin could store '/' to bypass all upload type restrictions. Validate each entry against a MIME type allowlist.
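One possible shape for that validation (a sketch; `$raw_accepted_types` is illustrative, and the exact patterns accepted should match what the field UI supports):

```php
// Sketch: keep only entries shaped like a MIME type (image/png), a
// wildcard (image/*), or an extension (.pdf); a bare '/' is rejected.
$entries = array_map( 'trim', explode( ',', $raw_accepted_types ) );
$valid   = [];
foreach ( $entries as $entry ) {
    if ( preg_match( '#^[a-z0-9.+-]+/(\*|[a-z0-9.+-]+)$#i', $entry )
        || preg_match( '/^\.[a-z0-9]+$/i', $entry ) ) {
        $valid[] = $entry;
    }
}
```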

7. storage_download loads entire file into PHP memory

File: dt-posts/dt-posts-endpoints.php

The endpoint calls wp_remote_get() with a 300-second timeout, buffers the entire response body in PHP memory with wp_remote_retrieve_body(), then echoes it. For large files this will exhaust the PHP memory limit and block a PHP worker for up to 5 minutes. The correct approach is to issue an HTTP redirect to the presigned S3 URL, letting the browser download directly from S3. Presigned URLs are already scoped and time-limited. If server-side mediation is required, use PHP chunked streaming instead of full buffering.
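A sketch of the redirect approach (the `DT_Storage_API::get_file_url()` method name is hypothetical; use whatever the storage plugin actually exposes for presigned URLs):

```php
// Sketch: redirect the client to the time-limited presigned URL instead of
// proxying the file body through PHP memory.
$presigned_url = DT_Storage_API::get_file_url( $file_key, $url_params );
if ( empty( $presigned_url ) ) {
    return new WP_Error( 'download_failed', 'Could not generate download URL.', [ 'status' => 500 ] );
}
wp_redirect( $presigned_url, 302 ); // wp_safe_redirect() would strip the external S3 host
exit;
```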

8. $icon output escaping regression in shared_attributes()

File: dt-core/utilities/dt-components.php

The previous code applied esc_html() to the icon value when building the attribute string. The new code removes this. Since $icon originates from field settings (admin-editable), a crafted font-icon value could break out of the attribute context. Restore esc_html() on the final icon value.

9. Race condition in concurrent file deletions

File: dt-posts/dt-posts-endpoints.php

The delete flow is: (1) fetch post meta, (2) find matching file, (3) delete from S3, (4) write updated array back. Two concurrent deletions on the same field will each read the same initial array and one write will overwrite the other — causing a deleted file to reappear in post meta while it is already gone from S3. Consider serializing deletes with a transient lock or using optimistic locking with a version check.
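A best-effort sketch of the transient-lock option (transients are not a true mutex — there is still a small window between get and set — but this closes the common read-modify-write race):

```php
// Sketch: coarse per-field lock around the delete flow.
$lock_key = "dt_file_delete_{$post_id}_{$meta_key}";
if ( get_transient( $lock_key ) ) {
    return new WP_Error( 'locked', 'Another file operation is in progress; retry shortly.', [ 'status' => 409 ] );
}
set_transient( $lock_key, 1, 30 ); // 30-second safety expiry
try {
    // ...re-read meta, delete from S3, write the updated array back...
} finally {
    delete_transient( $lock_key );
}
```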


Medium

10. File type validated from browser-supplied $_FILES['type']

File: dt-posts/dt-posts-endpoints.php

$_FILES['type'] is provided by the browser and cannot be trusted for security decisions. The server should verify actual file content using wp_check_filetype_and_ext() (which inspects magic bytes) to confirm the content matches the claimed type. If DT_Storage_API::upload_file() already performs magic-byte validation, please add a comment noting this.
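A minimal sketch of that content check (`$uploaded_file` follows the naming used elsewhere in this review):

```php
// Sketch: verify the actual file contents, not the browser-supplied type.
$check = wp_check_filetype_and_ext( $uploaded_file['tmp_name'], $uploaded_file['name'] );
if ( empty( $check['type'] ) ) {
    return new WP_Error( 'invalid_file_type', 'File contents do not match an allowed type.', [ 'status' => 400 ] );
}
$uploaded_file['type'] = $check['type']; // trust the server-side determination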

11. meta_key not validated as a registered file_upload field

All new storage endpoints.

In storage_delete_single, storage_rename_single, and storage_download, the meta_key parameter is sanitized as a string but never verified to be a registered file_upload field on the given post type. A user with can_update permission could pass arbitrary meta keys (e.g. post_status) to the delete endpoint, potentially corrupting unrelated post meta. Validate meta_key against DT_Posts::get_post_settings($post_type)['fields'] and confirm the field type is file_upload before proceeding.
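A sketch of that guard, reusing the field-settings accessor already shown earlier in this PR:

```php
// Sketch: confirm the meta key is a registered file_upload field
// on this post type before touching any meta.
$field_settings = DT_Posts::get_post_field_settings( $post_type );
if ( ! isset( $field_settings[ $meta_key ] )
    || ( $field_settings[ $meta_key ]['type'] ?? '' ) !== 'file_upload' ) {
    return new WP_Error( 'invalid_meta_key', 'Not a file_upload field on this post type.', [ 'status' => 400 ] );
}
```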

12. No server-side file count limit per upload

File: dt-posts/dt-posts-endpoints.php

There is no upper bound on the number of files uploaded in a single request or accumulated per field. A user with can_update permission could exhaust storage or cause post meta serialization bloat. Add a configurable maximum file count limit enforced server-side.

13. Orphaned S3 objects on DB write failure

File: dt-posts/dt-posts-endpoints.php

If S3 deletion succeeds but update_post_meta() fails, the file is gone from storage but still referenced in the database. The reverse (S3 timeout, meta already updated) leaves orphaned S3 objects. At minimum, log failures so they can be reconciled manually.

14. SQL built via string interpolation in has_value_sql

File: dt-posts/posts.php

The new file_upload filter uses string interpolation to build SQL. While the specific values here are safe (wpdb table name, get_current_user_id()), WordPress coding standards require $wpdb->prepare() for all dynamic SQL queries. Please refactor to use prepared statements with placeholders.
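An illustrative shape for the prepared-statement version (the real query will differ; the point is that the table name comes from the `$wpdb` property while all values go through placeholders):

```php
// Sketch only — not the actual has_value_sql query.
$sql = $wpdb->prepare(
    "SELECT DISTINCT pm.post_id
     FROM {$wpdb->postmeta} pm
     WHERE pm.meta_key = %s AND pm.meta_value != ''",
    $meta_key
);
```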


Low

15. No link to created record on file upload failure

File: dt-assets/js/new-record.js

If a record is created successfully but a file upload then fails, the user is shown an error with no path to the newly-created record. The catch block should include a link to response.permalink so users can access the record even when uploads fail.

16. Download endpoint uses POST method for a read operation

File: dt-posts/dt-posts-endpoints.php

The download endpoint is registered as POST. Per REST semantics, read operations should use GET. Using POST prevents browser caching, bookmarking, and direct linking. Consider GET with query parameters, or (better) switching to presigned URL redirects as described in finding #7.

17. Admin UI strings not translatable

File: dt-core/admin/js/dt-settings.js

New strings in the file upload field settings modal are hardcoded English. All other admin PHP UI strings use esc_html_e() or __(). Use wp_localize_script() or @wordpress/i18n for these strings, or add a TODO comment noting the i18n gap.

18. Full serialized file arrays stored in activity log

File: dt-posts/dt-posts-endpoints.php

Activity log entries store the complete serialized array of all file objects (including S3 keys) as meta_value and old_value. For records with many files this causes significant activity log table bloat and leaks S3 key paths into the log. Consider logging only file names and counts.


Summary

| Severity | Count |
| --- | --- |
| Critical | 4 |
| High | 5 |
| Medium | 5 |
| Low | 4 |

The most important items to address before merging are the MIME type fallback (#1), header injection (#3), missing input sanitization in rename (#2), and the full-file memory buffering in download (#7). The meta_key validation gap (#11) is also important as it is a potential privilege escalation vector for users with update access.

The overall architecture and field type integration looks solid — these are fixable issues rather than fundamental design problems.


Review generated with Claude Code

@kodinkat
Collaborator Author

kodinkat commented Mar 4, 2026

@corsacca review ready.....

(Screenshot: 2026-03-04 14:43:45)



Development

Successfully merging this pull request may close these issues.

Field for uploading files.

3 participants