
feat: refactor cpu cache #1236

Open

blueswhen wants to merge 1 commit into main from refactor_cpucache

Conversation

@blueswhen
Collaborator

No description provided.

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly refactors the CPU cache management system by introducing a new, dedicated module for handling shared memory tensors. It centralizes the logic for creating and attaching CPU cache tensors, improving modularity and reusability across different cache implementations. Additionally, memory allocation and management utilities have been moved to a separate module. This change streamlines the management of shared memory resources for both embedding and KV caches, making the system more robust and easier to maintain.

Highlights

  • New CPU Cache Module: Introduced a new lightllm.common.cpu_cache module containing CpuCacheTensorBackend and CpuCacheTensorSpec to centralize and abstract the creation and attachment of shared memory tensors for CPU caches (see the sketch after this list).
  • Refactored Memory Management: The MemoryBlock and MemoryManager classes, previously embedded in the embed cache client, were extracted into a dedicated lightllm.server.embed_cache.allocator module, improving modularity and reusability.
  • Unified Cache Initialization: Updated CpuEmbedCacheClient and CpuKvCacheClient to leverage the new CpuCacheTensorBackend for managing their respective shared memory tensors, replacing redundant, duplicated logic.
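
The highlights above name CpuCacheTensorSpec and CpuCacheTensorBackend without showing their code. The following is a minimal sketch of what such a spec/backend pair could look like, not the PR's actual implementation: the real backend builds on lightllm's shared-memory pointer helpers (create_shm_kv_cache_ptr, attach_shm_kv_cache_ptr, register_shm_ptr_to_pin), whereas this stand-in uses Python's multiprocessing.shared_memory, and every field and method name other than the two class names is invented for illustration.

    from dataclasses import dataclass
    from multiprocessing import shared_memory
    from typing import Tuple

    import numpy as np
    import torch


    @dataclass
    class CpuCacheTensorSpec:
        # Illustrative fields; the PR's actual dataclass fields are not shown on this page.
        shm_name: str            # name of the shared-memory segment backing the cache
        shape: Tuple[int, ...]   # tensor shape, e.g. (num_pages, page_num_elements)
        dtype: torch.dtype       # element type of the cache tensor


    class CpuCacheTensorBackend:
        """Creates the shared-memory tensor in one process and attaches to it in the others."""

        def __init__(self, create: bool):
            self.create = create
            self._shm = None

        def get_tensor(self, spec: CpuCacheTensorSpec) -> torch.Tensor:
            elem_size = torch.tensor([], dtype=spec.dtype).element_size()
            nbytes = int(np.prod(spec.shape)) * elem_size
            if self.create:
                self._shm = shared_memory.SharedMemory(name=spec.shm_name, create=True, size=nbytes)
            else:
                self._shm = shared_memory.SharedMemory(name=spec.shm_name, create=False)
            # Wrap the shared buffer as a uint8 numpy array, then view it as a torch
            # tensor of the requested dtype and shape (zero copy).
            arr = np.frombuffer(self._shm.buf, dtype=np.uint8, count=nbytes)
            return torch.from_numpy(arr).view(spec.dtype).reshape(spec.shape)


    # Hypothetical usage, mirroring the create-or-attach pattern the changelog below
    # describes for CpuEmbedCacheClient and CpuKvCacheClient (names and sizes invented):
    spec = CpuCacheTensorSpec(shm_name="cpu_kv_cache", shape=(1024, 4096), dtype=torch.float16)
    kv_cache = CpuCacheTensorBackend(create=True).get_tensor(spec)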


Changelog
  • lightllm/common/cpu_cache/__init__.py
    • Added CpuCacheTensorBackend and CpuCacheTensorSpec to the module's public interface.
  • lightllm/common/cpu_cache/tensor_backend.py
    • Added CpuCacheTensorSpec dataclass to define properties for CPU cache tensors.
    • Added CpuCacheTensorBackend class to manage the creation and attachment of CPU cache tensors using shared memory.
  • lightllm/server/embed_cache/allocator.py
    • Added MemoryBlock class to represent continuous memory regions.
    • Added MemoryManager class to handle memory allocation and deallocation using sorted sets for efficient management (a simplified sketch follows this changelog).
  • lightllm/server/embed_cache/embed_cache_client.py
    • Removed direct imports for ctypes, numpy, and SortedSet.
    • Imported CpuCacheTensorBackend, CpuCacheTensorSpec, MemoryBlock, MemoryManager, and offload_embed_tensor_to_cache.
    • Refactored the __init__ method to use CpuCacheTensorBackend for creating or attaching the CPU embed cache tensor.
    • Removed redundant from .copy_to_cache import offload_embed_tensor_to_cache statements within methods.
    • Removed _create_shm_embed_kv_cache and _attach_shm_cpu_embed_cache methods.
    • Added _build_tensor_spec method to encapsulate the creation of CpuCacheTensorSpec.
  • lightllm/server/embed_cache/impl/naive_memory_cache.py
    • Removed MemoryBlock and SortedSet imports from embed_cache_client.
    • Imported SortedSet directly and MemoryBlock from the new allocator module.
  • lightllm/server/multi_level_kv_cache/cpu_cache_client.py
    • Removed direct imports for ctypes, torch, numpy, create_shm_kv_cache_ptr, attach_shm_kv_cache_ptr, and register_shm_ptr_to_pin.
    • Imported CpuCacheTensorBackend and CpuCacheTensorSpec.
    • Refactored the __init__ method to use CpuCacheTensorBackend for creating or attaching the CPU KV cache tensor.
    • Removed _create_shm_cpu_kv_cache and _attach_shm_cpu_kv_cache methods.
    • Added _build_tensor_spec method to encapsulate the creation of CpuCacheTensorSpec.
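
The allocator added in lightllm/server/embed_cache/allocator.py is only summarized above. Below is a simplified sketch of a free-list allocator of that shape, assuming first-fit allocation over a flat byte range; MemoryBlock, MemoryManager, mem_set_by_start, and release are names visible on this page, while alloc and the exact fields are assumptions for illustration.

    from dataclasses import dataclass
    from sortedcontainers import SortedSet


    @dataclass(frozen=True)
    class MemoryBlock:
        start: int   # offset of the block within the shared cache
        size: int    # length of the block in bytes

        @property
        def end(self) -> int:
            return self.start + self.size


    class MemoryManager:
        def __init__(self, total_size: int):
            # Free blocks kept sorted by start offset, so adjacent blocks sit next to
            # each other in the set and can be coalesced on release.
            self.mem_set_by_start = SortedSet(key=lambda b: b.start)
            self.mem_set_by_start.add(MemoryBlock(start=0, size=total_size))

        def alloc(self, size: int) -> MemoryBlock:
            # First-fit: take the first free block that is large enough.
            for block in self.mem_set_by_start:
                if block.size >= size:
                    self.mem_set_by_start.remove(block)
                    if block.size > size:
                        # Return the unused tail of the block to the free set.
                        self.mem_set_by_start.add(MemoryBlock(block.start + size, block.size - size))
                    return MemoryBlock(block.start, size)
            raise MemoryError("no free block large enough")

        def release(self, block: MemoryBlock) -> None:
            idx = self.mem_set_by_start.bisect_left(block)
            # Collect the two candidate neighbors before mutating the set, so that
            # removals cannot shift the indices under us.
            neighbors = [self.mem_set_by_start[i]
                         for i in (idx - 1, idx)
                         if 0 <= i < len(self.mem_set_by_start)]
            for nb in neighbors:
                if nb.end == block.start or block.end == nb.start:
                    self.mem_set_by_start.remove(nb)
                    block = MemoryBlock(min(block.start, nb.start), block.size + nb.size)
            self.mem_set_by_start.add(block)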

@gemini-code-assist (bot) left a comment

Code Review

This pull request refactors the CPU cache management by introducing a new CpuCacheTensorBackend to centralize shared memory tensor operations, which is a great improvement for code structure and maintainability. The MemoryManager is also extracted into its own file. The refactoring is applied consistently to both the embedding cache and the multi-level KV cache clients. My review includes a fix for a high-severity bug in the new memory allocator that could cause incorrect memory merging, and some suggestions to improve code consistency by translating comments to English.

Comment on lines +69 to +70
    for index in [finded_index - 1, finded_index, finded_index + 1]:
        if index < len(self.mem_set_by_start):

Severity: high

The logic for finding adjacent blocks to merge has a bug and can be optimized.

  1. Bug: The condition if index < len(self.mem_set_by_start) is incorrect when finded_index is 0. In that case, index becomes -1, and self.mem_set_by_start[-1] in Python accesses the last element of the list, which is not the intended neighbor. This can lead to incorrect merges.
  2. Optimization: The loop for index in [finded_index - 1, finded_index, finded_index + 1] checks more indices than necessary. Since mem_set_by_start is sorted by start address, only the blocks immediately before (finded_index - 1) and at (finded_index) the insertion point of the released block need to be checked.
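
As a standalone illustration of the pitfall (hypothetical values, a plain list instead of the SortedSet used in the PR):

    free_blocks = ["block@0", "block@4096", "block@8192"]

    finded_index = 0
    for index in [finded_index - 1, finded_index, finded_index + 1]:
        if index < len(free_blocks):           # passes for index == -1
            print(index, free_blocks[index])   # -1 silently selects 'block@8192', the last block, not a neighbor

    # With the suggested bounds guard, the out-of-range index is skipped:
    for index in [finded_index - 1, finded_index]:
        if 0 <= index < len(free_blocks):
            print(index, free_blocks[index])   # only 0 -> 'block@0'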
Suggested change (original, then replacement):

    for index in [finded_index - 1, finded_index, finded_index + 1]:
        if index < len(self.mem_set_by_start):

    for index in [finded_index - 1, finded_index]:
        if 0 <= index < len(self.mem_set_by_start):



class MemoryBlock:
"""内存块类,表示一个连续的内存区域"""

Severity: medium

For consistency with the rest of the codebase, which is in English, it would be better to write docstrings and comments in English. This improves maintainability for a wider audience.

Suggested change (original, then replacement):

    """内存块类,表示一个连续的内存区域"""

    """A memory block, representing a contiguous memory region."""

Comment on lines +25 to +28
"""
初始化内存管理器
:param total_size: 总内存大小
"""

Severity: medium

For consistency with the rest of the codebase, which is in English, it would be better to write docstrings and comments in English. This improves maintainability for a wider audience.

Suggested change (original, then replacement):

    """
    初始化内存管理器
    :param total_size: 总内存大小
    """

    """
    Initializes the memory manager.
    :param total_size: The total size of memory to manage.
    """

)
self.release(merge_block)
return
# 无法merge时,直接add

Severity: medium

For consistency with the rest of the codebase, which is in English, it would be better to write comments in English. This improves maintainability for a wider audience.

Suggested change (original, then replacement):

    # 无法merge时,直接add

    # If it can't be merged, add it directly
