Performance Optimization

levi edited this page Nov 6, 2025 · 2 revisions

Table of Contents

  1. Server-Side Optimization
  2. Client-Side Optimization
  3. Data Batching and Bulk Updates
  4. Event Management with TVEventBus
  5. Connection and Data Streaming
  6. Common Performance Bottlenecks
  7. Monitoring and Debugging

Server-Side Optimization

The PyTradingView server uses FastAPI and uvicorn for asynchronous handling of WebSocket connections and HTTP requests. The app.py file in the server module configures an ASGI server using uvicorn, enabling efficient concurrent processing of WebSocket connections. This setup matters for real-time financial data applications, where low latency and high throughput are essential.

The server implementation includes a restart mechanism capped at three restart attempts, providing resilience against transient failures. FastAPI's CORSMiddleware enables cross-origin resource sharing, which web-based trading clients require. The server binds to localhost (127.0.0.1) on a configured port, restricting access to the local machine.
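The restart budget described above can be sketched as follows. This is illustrative only: `MAX_RESTARTS`, `create_app`, and `run_server` are hypothetical names, not the actual app.py API.

```python
# Hypothetical sketch of a FastAPI app with CORS plus a bounded
# restart loop; not the real app.py implementation.
MAX_RESTARTS = 3  # give up after three consecutive failures


def create_app():
    # imported lazily so the restart logic below can be tested alone
    from fastapi import FastAPI
    from fastapi.middleware.cors import CORSMiddleware

    app = FastAPI()
    app.add_middleware(
        CORSMiddleware,
        allow_origins=["*"],  # relax CORS for local web clients
        allow_methods=["*"],
        allow_headers=["*"],
    )
    return app


def run_server(starter, max_restarts=MAX_RESTARTS):
    """Call `starter` until it exits cleanly or the restart budget is spent.

    In practice `starter` would be something like:
        lambda: uvicorn.run(create_app(), host="127.0.0.1", port=8000)
    """
    attempts = 0
    while attempts < max_restarts:
        try:
            starter()      # blocks until shutdown or failure
            return True    # clean shutdown
        except Exception:
            attempts += 1  # transient failure: try again
    return False           # budget exhausted
```

Separating the restart policy from the server starter keeps the policy unit-testable without binding a socket.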

```mermaid
flowchart TD
A[Client Request] --> B{FastAPI Router}
B --> C[WebSocket Connection]
B --> D[HTTP Request]
C --> E[uvicorn ASGI Server]
D --> E
E --> F[Business Logic]
F --> G[Data Processing]
G --> H[Response]
H --> I[Client]
```

Client-Side Optimization

Client-side performance in PyTradingView is optimized through efficient event handling, proper subscription cleanup, and resource management during chart reinitialization. The TVChart class provides methods to manage chart lifecycle events, such as onDataLoaded and onSymbolChanged, allowing developers to subscribe to specific events and perform actions when they occur.

Proper subscription management is crucial to prevent memory leaks. The TVSubscription class implements methods like subscribe, unsubscribe, and unsubscribeAll to manage event subscriptions. When a subscription is no longer needed, calling unsubscribe ensures that the callback is removed and resources are freed. This pattern is particularly important in long-running applications where charts are frequently created and destroyed.
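The subscribe/unsubscribe lifecycle above can be sketched as follows. The class here is a minimal stand-in modeled on the documented method names (`subscribe`, `unsubscribe`, `unsubscribeAll`); the real TVSubscription may differ in detail.

```python
# Minimal stand-in for a subscription object; illustrative only.
class TVSubscription:
    def __init__(self):
        self._callbacks = []

    def subscribe(self, callback):
        self._callbacks.append(callback)
        return callback  # handle for later removal

    def unsubscribe(self, callback):
        if callback in self._callbacks:
            self._callbacks.remove(callback)  # drop the reference

    def unsubscribeAll(self):
        self._callbacks.clear()

    def fire(self, *args):
        # copy so callbacks may unsubscribe during dispatch
        for cb in list(self._callbacks):
            cb(*args)


# Typical lifecycle: subscribe, react, then unsubscribe on teardown
sub = TVSubscription()
seen = []
handle = sub.subscribe(seen.append)
sub.fire("bar_loaded")
sub.unsubscribe(handle)  # prevents the callback outliving the chart
sub.fire("ignored")      # no longer delivered
```

Calling `unsubscribe` on teardown is what breaks the reference chain from the long-lived subscription back to the destroyed chart.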

```mermaid
sequenceDiagram
participant Client
participant TVChart
participant TVSubscription
Client->>TVChart : onDataLoaded()
TVChart->>TVSubscription : Create Subscription
TVSubscription->>Client : Return Subscription Object
Client->>TVSubscription : subscribe(callback)
TVSubscription->>Client : Invoke callback on data load
Client->>TVSubscription : unsubscribe()
TVSubscription->>TVSubscription : Cleanup resources
```

Data Batching and Bulk Updates

Data batching techniques are employed in the indicator engine to optimize calculations and reduce redundant operations. The TVEngineDrawing class in the indicators engine implements the run_indicators_for_chart method, which processes all active indicators for a given chart in a single pass. This approach minimizes the overhead of repeated data loading and context switching.

The applyStudiesOverrides method allows for bulk styling updates to indicators, reducing the number of individual API calls required to modify multiple properties. This is particularly useful when applying consistent visual themes across multiple indicators or when responding to user preferences.
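The saving from bulk overrides can be illustrated with a toy comparison: one merged dictionary applied in a single call versus one call per property. `FakeChart` and its methods are hypothetical stand-ins, not the library API.

```python
# Toy comparison: per-property calls vs one bulk override call.
class FakeChart:
    def __init__(self):
        self.style = {}
        self.api_calls = 0

    def set_property(self, key, value):
        self.api_calls += 1  # one round trip per property
        self.style[key] = value

    def apply_studies_overrides(self, overrides):
        self.api_calls += 1  # a single round trip for the whole theme
        self.style.update(overrides)


theme = {"ma.color": "#ff9800", "ma.linewidth": 2, "rsi.upper": 70}

slow = FakeChart()
for key, value in theme.items():
    slow.set_property(key, value)        # 3 calls

fast = FakeChart()
fast.apply_studies_overrides(theme)      # 1 call, same end state
```

Both charts end with identical styling; the bulk path simply collapses N updates into one, which is the point of applyStudiesOverrides.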

```mermaid
flowchart TD
A[Start Indicator Processing] --> B{Chart Context Exists?}
B --> |Yes| C[Get Active Indicators]
B --> |No| D[Return Empty Results]
C --> E[Loop Through Indicators]
E --> F[Load Data]
F --> G[Calculate Signals]
G --> H[Generate Drawables]
H --> I{Chart Available?}
I --> |Yes| J[Draw Elements]
I --> |No| K[Skip Drawing]
J --> L[Store Results]
K --> L
L --> M{More Indicators?}
M --> |Yes| E
M --> |No| N[Return Results]
```

Event Management with TVEventBus

The TVEventBus class plays a critical role in decoupling components and preventing memory leaks by providing a publish-subscribe mechanism for event communication. This event bus implementation supports both synchronous and asynchronous event handling, allowing components to react to system events without tight coupling.

The event bus maintains a registry of subscribers for each event type, ensuring that events are delivered only to interested parties. When a component is destroyed, it should unsubscribe from events to prevent dangling references and memory leaks. The publish_sync method allows synchronous code to trigger events, while the publish method supports asynchronous event dispatching.
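A minimal sketch of the synchronous path, following the interface shown in the class diagram below (the real TVEventBus also queues events for asynchronous dispatch via `publish`):

```python
# Sketch of a publish-subscribe bus: per-event-type subscriber
# registry with explicit unsubscribe. Synchronous dispatch only.
from collections import defaultdict


class EventBus:
    def __init__(self):
        self._subscribers = defaultdict(list)

    def subscribe(self, event_type, callback):
        self._subscribers[event_type].append(callback)

    def unsubscribe(self, event_type, callback):
        if callback in self._subscribers[event_type]:
            self._subscribers[event_type].remove(callback)

    def publish_sync(self, event_type, data=None, source=None):
        # deliver only to parties registered for this event type
        for cb in list(self._subscribers[event_type]):
            cb({"type": event_type, "data": data or {}, "source": source})

    def clear_subscribers(self, event_type):
        self._subscribers[event_type].clear()


bus = EventBus()
events = []
handler = events.append
bus.subscribe("CHART_READY", handler)
bus.publish_sync("CHART_READY", {"symbol": "AAPL"}, source="demo")
bus.unsubscribe("CHART_READY", handler)  # teardown: no dangling references
```

Unsubscribing on teardown is what prevents the bus from keeping destroyed components alive through their callbacks.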

```mermaid
classDiagram
class EventBus {
+_subscribers : Dict[EventType, List[Callable]]
+_event_queue : asyncio.Queue
+_running : bool
+subscribe(event_type, callback)
+unsubscribe(event_type, callback)
+publish(event_type, data, source)
+publish_sync(event_type, data, source)
+clear_subscribers(event_type)
}
class EventType {
<<enumeration>>
WIDGET_CREATED
WIDGET_READY
CHART_READY
INDICATOR_LOADED
INDICATOR_CALCULATED
}
class Event {
+type : EventType
+data : Dict[str, Any]
+source : Optional[str]
}
EventBus --> Event : publishes
EventBus --> EventType : uses
```

Connection and Data Streaming

Efficient data streaming from custom datafeeds is achieved through the TVDatafeed class and its implementations like BADatafeed. These classes provide methods for subscribing to real-time bar updates (subscribeBars) and quote data (subscribeQuotes), enabling continuous data flow to the client.

Connection pooling is managed through the TVBridge class, which handles communication between Python and the frontend. The bridge establishes a WebSocket connection to the Node.js server, allowing bidirectional data exchange. This architecture minimizes connection overhead and ensures reliable message delivery.
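The subscribeBars-style streaming interface can be sketched as follows. The method names mirror the datafeed API described above, but the body is illustrative only; a real feed would be driven by the upstream data source rather than a manual `push_bar`.

```python
# Illustrative datafeed sketch: one listener list per symbol keeps
# bar delivery targeted to interested subscribers only.
class MiniDatafeed:
    def __init__(self):
        self._bar_listeners = {}

    def subscribeBars(self, symbol, on_bar):
        self._bar_listeners.setdefault(symbol, []).append(on_bar)

    def unsubscribeBars(self, symbol, on_bar):
        listeners = self._bar_listeners.get(symbol, [])
        if on_bar in listeners:
            listeners.remove(on_bar)

    def push_bar(self, symbol, bar):
        # fan a new bar out to every subscriber of that symbol
        for listener in self._bar_listeners.get(symbol, []):
            listener(bar)


feed = MiniDatafeed()
received = []
on_bar = received.append
feed.subscribeBars("BTCUSD", on_bar)
feed.push_bar("BTCUSD", {"time": 1, "close": 50000.0})
feed.push_bar("ETHUSD", {"time": 1, "close": 3000.0})  # not delivered
```

Keying listeners by symbol means a bar for ETHUSD never touches BTCUSD subscribers, which keeps per-update fan-out proportional to actual interest.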

```mermaid
sequenceDiagram
participant Frontend
participant TVBridge
participant Datafeed
participant DataSource
Frontend->>TVBridge : Connect
TVBridge->>DataSource : Establish Connection
Datafeed->>DataSource : subscribeBars()
DataSource->>Datafeed : Send Bar Updates
Datafeed->>TVBridge : Forward Data
TVBridge->>Frontend : Stream Data
Frontend->>TVBridge : Request Data
TVBridge->>Datafeed : getBars()
Datafeed->>DataSource : Fetch Historical Data
DataSource->>Datafeed : Return Data
Datafeed->>TVBridge : Process Data
TVBridge->>Frontend : Send Data
```

Common Performance Bottlenecks

Several common performance bottlenecks are addressed in the PyTradingView architecture. Memory leaks in long-running indicator engines are mitigated through proper subscription management and object disposal. The TVObject base class implements a dispose method that releases objects back to the pool, preventing accumulation of unused instances.

Excessive drawing operations are minimized by batching updates and using efficient rendering techniques. The indicator engine's clear_all_drawings method removes all graphics before redrawing, ensuring a clean state and preventing duplicate elements. Inefficient data processing is avoided by caching results and reusing computed values when possible.
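The result-caching idea can be sketched with standard-library memoization: key indicator results by their inputs so repeated requests reuse computed values. `compute_sma` is a hypothetical helper, not part of the library.

```python
# Sketch of caching indicator results by (symbol, length) so a
# repeated request skips recomputation entirely.
from functools import lru_cache

PRICES = {"AAPL": (10.0, 11.0, 12.0, 13.0, 14.0)}

calls = {"n": 0}  # count actual recomputations


@lru_cache(maxsize=128)
def compute_sma(symbol, length):
    calls["n"] += 1
    data = PRICES[symbol]
    # simple moving average over each full window of `length` bars
    return tuple(
        sum(data[i - length + 1 : i + 1]) / length
        for i in range(length - 1, len(data))
    )


first = compute_sma("AAPL", 3)   # computed
second = compute_sma("AAPL", 3)  # served from cache, no recomputation
```

The cache key must capture everything the result depends on; when the underlying series changes, the stale entries have to be invalidated (e.g. `compute_sma.cache_clear()`), which is the usual cost of this technique.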

```mermaid
flowchart TD
A[Identify Bottleneck] --> B{Memory Leak?}
B --> |Yes| C[Check Subscriptions]
C --> D[Ensure Unsubscribe Called]
D --> E[Verify Object Disposal]
E --> F[Use TVObjectPool]
B --> |No| G{Drawing Performance?}
G --> |Yes| H[Batch Updates]
H --> I[Use applyStudiesOverrides]
I --> J[Minimize Redraws]
G --> |No| K{Data Processing?}
K --> |Yes| L[Cache Results]
L --> M[Use Efficient Algorithms]
M --> N[Batch Calculations]
K --> |No| O[Optimize Network]
O --> P[Use Connection Pooling]
P --> Q[Compress Data]
```

Monitoring and Debugging

Monitoring strategies and debugging tools are essential for identifying performance issues in both development and production environments. The PyTradingView framework includes logging throughout its components, with detailed log messages for key operations such as server startup, event handling, and data processing.

The TVSubscribeManager class provides visibility into event subscriptions, logging when handlers are added or removed. This information is invaluable for diagnosing issues related to event propagation and memory management. Additionally, the use of type hints and runtime validation helps catch errors early in the development process.

For production monitoring, developers can implement custom metrics collection and integrate with external monitoring services. The modular architecture of the indicator engine allows for easy instrumentation of specific components, enabling granular performance tracking.
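One way to instrument individual components, as suggested above, is a timing decorator that accumulates per-function durations into a metrics dictionary. All names here are illustrative, not part of PyTradingView itself.

```python
# Sketch of granular timing instrumentation: a decorator that records
# total elapsed seconds and call count per wrapped function.
import time
from functools import wraps

METRICS = {}  # function name -> (total_seconds, call_count)


def timed(fn):
    @wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            # accumulate even when the wrapped call raises
            total, count = METRICS.get(fn.__name__, (0.0, 0))
            METRICS[fn.__name__] = (
                total + time.perf_counter() - start,
                count + 1,
            )
    return wrapper


@timed
def run_indicator(values):
    return sum(values) / len(values)


run_indicator([1.0, 2.0, 3.0])
run_indicator([4.0, 5.0, 6.0])
# METRICS["run_indicator"] now holds (total_seconds, 2)
```

The same `METRICS` dictionary can then be scraped by a custom exporter or pushed to an external monitoring service on a timer.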

```mermaid
flowchart TD
A[Enable Logging] --> B[Monitor Server Logs]
B --> C{Performance Issue?}
C --> |Yes| D[Analyze Log Patterns]
D --> E[Identify Slow Operations]
E --> F[Profile Code]
F --> G[Optimize Algorithm]
G --> H[Test Changes]
H --> I[Deploy Fix]
C --> |No| J[Check Memory Usage]
J --> K{Memory Leak?}
K --> |Yes| L[Trace Object Lifetimes]
L --> M[Fix Subscription Management]
M --> H
K --> |No| N[Monitor Network]
N --> O{Latency Issues?}
O --> |Yes| P[Optimize Data Transfer]
P --> H
O --> |No| Q[System Healthy]
```
