-
Notifications
You must be signed in to change notification settings - Fork 144
LangGraph Plugin for Temporal Python SDK #1263
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
base: main
Are you sure you want to change the base?
Conversation
Setup package structure for LangGraph Temporal integration prototypes. These are throwaway prototypes to validate technical assumptions before implementing the production integration. Package structure: - temporalio/contrib/langgraph/ - Main package (empty for now) - temporalio/contrib/langgraph/_prototypes/ - Validation prototypes Prototypes to implement: 1. Pregel loop - Validate AsyncPregelLoop submit function injection 2. Write capture - Validate CONFIG_KEY_SEND callback mechanism 3. Task interface - Document PregelExecutableTask structure 4. Serialization - Test state/message serialization 5. Graph builder - Test graph reconstruction approaches
Validates that we can inject a custom submit function into LangGraph's Pregel execution loop via CONFIG_KEY_RUNNER_SUBMIT config key. Key findings: - Submit injection works for parallel graph execution - Sequential graphs use fast path and may not call submit - PregelExecutableTask provides: name, id, input, proc, config, writes Tests cover: - Basic graph execution (async/sync) - Submit injection with sequential and parallel graphs - PregelExecutableTask attribute inspection
Import CONFIG_KEY_RUNNER_SUBMIT from langgraph._internal._constants instead of langgraph.constants to avoid the deprecation warning. The mechanism is still used internally by LangGraph - the public export just warns because it's considered private API. We document this decision and note that future LangGraph versions may change this API.
PregelExecutableTask is a dataclass, not a NamedTuple. Update test to use dataclasses.fields() instead of checking _fields attribute. Validates that: - PregelExecutableTask is a dataclass - Has 'writes' field of type deque[tuple[str, Any]] - Writes are captured correctly after task execution
Document PregelExecutableTask dataclass structure: - Core fields: name, id, path, input, proc, config, triggers - Output: writes (deque), writers - Policy: retry_policy, cache_key - Nested: subgraphs Includes config filtering for serialization: - Filters __pregel_* and __lg_* internal keys - Filters non-JSON-serializable values - Preserves user keys and standard config VALIDATION STATUS: PASSED
Validates that LangGraph state can be serialized for Temporal activities: - LangChain messages are Pydantic models (HumanMessage, AIMessage, etc.) - Temporal pydantic_data_converter handles them automatically - Default converter works for basic dict states - End-to-end tests verify workflow -> activity -> workflow round-trip Key findings: - Use pydantic_data_converter for LangChain message types - Configure sandbox to passthrough langchain_core modules - No custom serialization needed VALIDATION STATUS: PASSED
- graph_builder_proto.py: Updated with latest prototype changes - graph_registry_proto.py: Thread-safe graph caching prototype - VALIDATION_SUMMARY.md: Phase 1 validation results and findings These prototypes validated the architecture for Phase 2 production code.
Prototypes are preserved in git history (commit 5521520). Production code is now in: - _models.py, _runner.py, _plugin.py, _activities.py, _graph_registry.py
…ode config Phase 3 - Activity and Write Capture: - Fix activity to inject CONFIG_KEY_SEND callback for proper write capture - Writes are captured via LangGraph internal writer mechanism - Add 3 activity integration tests validating real node execution Phase 4 - Per-Node Configuration: - Support activity_timeout via node metadata - Support task_queue via node metadata - Support heartbeat_timeout via node metadata - Map LangGraph RetryPolicy to Temporal RetryPolicy - Add 4 configuration tests File structure follows OpenAI agents SDK pattern (all internal modules use _ prefix). All 30 tests passing.
Key changes: - Rewrote runner to use LangGraph AsyncPregelLoop for proper graph traversal - Fixed conditional edge routing by merging input_state with captured writes - Added CONFIG_KEY_READ callback to activity for state reading support - Added example.py with customer support agent demonstrating conditional routing - Fixed LangChain message serialization through Pydantic payload converter
Change from sequential to parallel task execution using asyncio.gather(). This improves performance while maintaining BSP (Bulk Synchronous Parallel) correctness - all tasks still complete before after_tick() is called. Before (sequential): Total time = sum of all activity durations After (parallel): Total time ≈ max of activity durations Also adds the consolidated design document (renamed from v3).
- Add native LangGraph interrupt API matching `__interrupt__` return pattern - Support resume with `Command(resume=value)` after interrupt - Track interrupted node name to correctly route resume values - Add PregelScratchpad setup in activities for interrupt() function - Remove GraphInterrupt exception in favor of return value API Tests added: - Unit tests for interrupt models and activity behavior - Integration tests for runner interrupt/resume flow - E2E tests with real Temporal worker for full interrupt cycle
- Fix multi-interrupt resume by preserving completed nodes across invocations instead of resetting them. This prevents nodes like step1 from re-running and re-interrupting when resuming step2. - Merge resumed node writes into input_state before starting the loop to ensure writes are included in final output even if loop doesn't schedule the resumed node. - Add invocation counter for unique activity IDs across workflow replays. - Add comprehensive e2e tests with real Temporal workers testing: - Simple graph execution without interrupts - Single interrupt with signal-based resume (approval flow) - Interrupt with rejection - Multiple sequential interrupts - Fix type errors in test_langgraph.py by renaming lambda parameter from 's' to 'state' to match LangGraph's type annotations.
- Add StateSnapshot model for checkpoint data - Add get_state() method to runner for extracting checkpoints - Add checkpoint parameter to compile() for restoring from checkpoint - Add should_continue callback to ainvoke() for external execution control - Callback is invoked once per tick (BSP superstep) - When should_continue() returns False, returns __checkpoint__ in result - Add unit tests for checkpoint extraction, restoration, and should_continue - Update ContinueAsNewWorkflow example to demonstrate the pattern
Implements Phase 1 of Store support as described in DESIGN_STORE.md: - Add ActivityLocalStore class that captures writes for replay in workflow - Add StoreItem, StoreWrite, StoreSnapshot models for serialization - Update NodeActivityInput/Output with store_snapshot and store_writes - Add _store_state to runner for canonical store data - Inject store via Runtime object so nodes can use get_store() - Include store_state in StateSnapshot for checkpoint/continue-as-new - Add comprehensive unit tests for store models and ActivityLocalStore Store operations work by: 1. Workflow maintains _store_state dict with all store data 2. Before activity: Runner creates StoreSnapshot from current state 3. In activity: ActivityLocalStore serves reads from snapshot, captures writes 4. After activity: Runner applies store_writes to _store_state 5. On checkpoint: Store state is serialized for continue-as-new
- Add test_store_persistence: verifies store data persists across nodes within a single graph invocation (node1 writes, node2 reads) - Add test_store_persistence_across_invocations: verifies store data persists across multiple ainvoke() calls within the same workflow - Fix activity to always create ActivityLocalStore (even when empty) so get_store() works on first invocation - Add MultiInvokeStoreWorkflow and counter_node for multi-invocation test
- Implement Send API for dynamic parallelism (map-reduce patterns) - Add SendPacket model to serialize Send objects - Capture Send objects separately from regular writes in activities - Execute SendPackets as separate activities with Send.arg as input - Add validation tests for Send API, Subgraphs, and Command API - Update MISSING_FEATURES.md to reflect validated features: - Send API: ✅ Implemented - Subgraphs: ✅ Implemented (native Pregel support) - Command API: ✅ Implemented (native Pregel support)
Internal design documents created during Phase 1 implementation. These will be removed in the next commit but preserved in history.
Add comprehensive README for end users covering: - Quick start example - Per-node configuration (timeouts, retries, task queues) - Human-in-the-loop with interrupt() and signals - Store API for cross-node persistence - Continue-as-new for long-running workflows - Compile options reference - Important notes and compatibility table
…ions Add a type-safe helper function for configuring LangGraph nodes with Temporal activity options, replacing untyped dict metadata. Changes: - Add temporal_node_metadata() with all execute_activity options: - schedule_to_close_timeout, schedule_to_start_timeout - start_to_close_timeout, heartbeat_timeout, task_queue - retry_policy, cancellation_type, versioning_intent - summary, priority, run_in_workflow - Consolidate _get_node_* methods into single _get_node_activity_options() - Support dict merge operator (|) for combining with other metadata - Update example.py and README.md to use the new helper - Maintain backwards compatibility with legacy "activity_timeout" key
Replace separate default_activity_timeout, default_max_retries, and default_task_queue parameters with a single defaults parameter that accepts temporal_node_metadata() output. This provides a consistent API where all Temporal configuration uses the same typed helper function, both for per-node metadata and for compile-time defaults. Changes: - compile() now accepts defaults=temporal_node_metadata(...) - TemporalLangGraphRunner.__init__() updated to accept defaults dict - _get_node_activity_options() merges defaults with node metadata - retry_policy priority: node metadata > LangGraph native > defaults > built-in - Updated README, example.py, and tests
- Rename temporal_node_metadata() to node_activity_options() - Rename defaults parameter to default_activity_options - Rename node_config parameter to per_node_activity_options The new naming is clearer about the purpose of each function/parameter and avoids confusion when used outside the langgraph contrib module.
Add default_activity_options and per_node_activity_options parameters to LangGraphPlugin, allowing users to set activity configuration at the plugin level instead of repeating it in every compile() call. Configuration priority (highest to lowest): 1. Node metadata from add_node(metadata=...) 2. per_node_activity_options from compile() 3. per_node_activity_options from LangGraphPlugin() 4. default_activity_options from compile() 5. default_activity_options from LangGraphPlugin() 6. Built-in defaults (5 min timeout, 3 retries) Options at each level are merged, so users can set base defaults at the plugin level and selectively override specific options.
…ic execution Add wrappers that allow LangChain tools and chat models to execute as Temporal activities when running inside a workflow. This enables durable execution of agentic nodes (like create_react_agent) where individual tool calls and LLM invocations are each executed as separate activities. New public APIs: - temporal_tool(): Wraps LangChain tools for activity execution - temporal_model(): Wraps LangChain chat models for activity execution - register_tool(): Register tools for activity-side lookup - register_model(): Register model instances - register_model_factory(): Register model factory functions Internal additions: - Tool and model registries for activity-side lookup - execute_tool and execute_chat_model activities - ToolActivityInput/Output and ChatModelActivityInput/Output models Includes comprehensive tests and e2e test with create_react_agent.
- Add support for LangChain 1.0+ create_agent alongside legacy create_react_agent - Add temporal_node_metadata() helper for combining activity options with run_in_workflow flag - Remove run_in_workflow from node_activity_options() - it should only be specified per-node, not as a default - Update README with Agentic Execution section and Hybrid Execution examples - Update design doc with helper functions documentation (section 5.3.6) - Add tests for temporal_node_metadata()
Reorganize the LangGraph test suite into focused, well-organized files: - conftest.py: Shared fixtures for registry clearing - e2e_graphs.py: All graph builders for E2E tests - e2e_workflows.py: Consolidated workflow definitions - test_e2e.py: All 11 E2E tests in organized classes - test_models.py: Pydantic model tests (21 tests) - test_registry.py: Registry tests (14 tests) - test_plugin.py: Plugin tests (6 tests) - test_runner.py: Runner tests (7 tests) - test_activities.py: Activity tests (7 tests) - test_store.py: Store tests (7 tests) - test_temporal_tool.py: Tool wrapper tests (7 tests) - test_temporal_model.py: Model wrapper tests (7 tests) Fix graph registry to eagerly build graphs at registration time. This ensures graph compilation happens outside the workflow sandbox, avoiding issues with Annotated type resolution inside the sandbox. All 11 E2E workflows now run with sandbox enabled, using imports_passed_through() for langchain imports where needed. Deleted: test_langgraph.py, test_validation.py, test_temporal_tool_model.py Total: 87 tests (76 unit + 11 E2E)
- Add experimental warnings to module docstring, LangGraphPlugin, compile(), temporal_tool(), and temporal_model() - Improve README with introduction section, table of contents, and architecture diagram - Remove UnsandboxedWorkflowRunner usage (e2e tests pass with sandbox) - Mark module as experimental (may be abandoned)
Add comprehensive documentation explaining why we use LangGraph's internal APIs (langgraph._internal.*) for node execution: - CONFIG_KEY_SEND/READ/SCRATCHPAD/RUNTIME/CHECKPOINT_NS - PregelScratchpad class These are needed because we execute nodes as individual activities outside of LangGraph's Pregel execution loop, requiring us to inject the same context Pregel would normally provide. Documents risks and alternatives considered.
LangGraph's get_config() and get_store() functions read from the var_child_runnable_config ContextVar. When ainvoke() is called on a node runnable, it uses copy_context() which copies the current state of all ContextVars. Previously, we weren't setting this ContextVar before calling ainvoke(), so nodes that use get_config() or get_store() would fail with: "RuntimeError: Called get_config outside of a runnable context" This fix sets the ContextVar before invoking the node and resets it in a finally block to ensure cleanup.
Add support for nodes to execute directly in the workflow using temporal_node_metadata(run_in_workflow=True). This allows nodes to call Temporal operations like activities, child workflows, and signals. Implementation uses workflow.unsafe.sandbox_unrestricted() to allow LangGraph callback machinery to work inside the workflow sandbox, following the pattern from langchain_interceptor.py. Changes: - Update _execute_in_workflow() to use sandbox_unrestricted() - Add documentation for run_in_workflow in README - Add e2e test with activity called from workflow node
Refactor _execute_in_workflow to run user's node function under sandbox restrictions while only using sandbox_unrestricted for LangGraph's ChannelWrite step. This follows the langchain_interceptor.py pattern. Structure: - Extract user function from RunnableSeq.steps[0] (afunc or func) - Execute user function SANDBOXED (catches non-deterministic code) - Execute ChannelWrite UNRESTRICTED (handles state/edge traversal) - Fallback to full unrestricted for complex runnables
Verify that non-deterministic code (using random module) in a run_in_workflow node is caught by the sandbox, blocking the workflow.
- Update Quick Start with ClientConfig pattern for env var support - Improve run_in_workflow example with realistic orchestrator pattern - Add Subgraph Support section with explicit and agent subgraph examples - Add Human-in-the-Loop section with queries and signal handling - Add Using Temporal Signals/Updates Directly subsection - Add Command API (Dynamic Routing) section - Add Sample Applications section linking to langgraph_plugin branch - Update Important Notes with specific activity names - Add external streaming option (Redis) - Remove redundant Compatibility table
Add two methods to TemporalLangGraphRunner for visualizing graph structure and execution state: - get_graph_mermaid(): Returns Mermaid diagram with nodes colored by status (green=completed, yellow=current/interrupted, gray=pending) - get_graph_ascii(): Returns ASCII art diagram with progress indicators (✓ completed, ▶ current/interrupted, ○ pending) These methods can be exposed via Temporal workflow queries to provide user-friendly visibility into graph execution progress.
Changed _execute_loop_tasks to use asyncio.gather() for concurrent execution instead of sequential for loop. LangGraph Bulk Synchronous Parallel model requires all ready tasks in a step execute in parallel. Added unit test verifying concurrent execution using asyncio.Barrier.
Tests for the new functional API integration with Temporal: - test_functional_models.py: Tests for TaskActivityInput, TaskActivityOutput, FunctionalRunnerConfig - test_functional_registry.py: Tests for EntrypointRegistry and global registry functions - test_functional_future.py: Tests for TemporalTaskFuture and InlineFuture - test_functional_activity.py: Tests for execute_langgraph_task activity - test_functional_runner.py: Tests for TemporalFunctionalRunner and compile_functional 96 unit tests covering all new classes and functions.
Add the complete Functional API implementation for LangGraph integration: - _functional_runner.py: TemporalFunctionalRunner and compile_functional - _functional_activity.py: Task activity execution - _functional_registry.py: Entrypoint registration - _functional_future.py: Task future handling - _functional_models.py: Data models - _functional_plugin.py: LangGraphFunctionalPlugin Also fix lint issues (import sorting, formatting, docstrings).
Add is_running() and run_in_executor() methods to _WorkflowInstanceImpl to support LangGraph Pregel executor which uses these asyncio methods: - is_running(): Returns True since workflow event loop is always running - run_in_executor(): Runs sync functions directly without threading Also add langsmith to passthrough modules and e2e tests for functional API.
Instead of trying to intercept Pregel CONFIG_KEY_CALL (which it always overwrites), we now execute the entrypoint function directly with our callback injected. This ensures @task calls are routed to Temporal activities. Key changes: - Extract entrypoint function from Pregel wrapper - Build execution config with CONFIG_KEY_CALL, CONFIG_KEY_SCRATCHPAD, CONFIG_KEY_RUNTIME injected - Execute function directly with context var set - Handle GraphInterrupt and ParentCommand exceptions - Fix TaskActivityOutput unwrapping in TemporalTaskFuture.__await__()
- Add _get_task_options() to unwrap {"temporal": {...}} format
- Update _get_task_timeout() to use the new unwrapping
- Update activity_options() docstring to include Functional API usage
The activity_options() helper now works consistently across both
Graph API and Functional API configurations.
295a7fb to
63f3508
Compare
- Create unified LangGraphPlugin supporting both StateGraph and @entrypoint - Add auto-detection to compile() for returning correct runner type - Fix _is_entrypoint() to distinguish @entrypoint (Pregel) from StateGraph.compile() (CompiledStateGraph) by checking class type and presence of __start__ node - Fix timedelta serialization in _filter_config() by excluding temporal options from metadata (handled separately by _get_node_activity_options) - Update tests to use real graphs instead of MagicMock
63f3508 to
eeeaf00
Compare
Since this library has not been released yet, remove all backward compatibility code for a cleaner implementation: - Delete _functional_plugin.py (deprecated LangGraphFunctionalPlugin) - Remove compile_functional from _functional_runner.py - Remove backward compatibility exports from __init__.py - Update tests to use unified compile() and LangGraphPlugin APIs
- Rewrite LANGGRAPH_PLUGIN_DESIGN.md to v4.0 (Complete) - Add Functional API Integration section - Remove historical V1/V2 comparisons - Remove outdated FUNCTIONAL_API_PROPOSAL.md - Remove MISSING_FEATURES.md
Implement InMemoryCache that stores task results and can be serialized for continue-as-new workflows. This allows tasks executed in one workflow run to be cached and reused in subsequent runs after continue-as-new. Key changes: - Add _functional_cache.py with InMemoryCache implementing LangGraph cache interface (get/set/clear) with serialization support - Update _functional_runner.py to use cache for task results and support checkpoint parameter for restoring cache state - Update _functional_future.py with on_result callback for caching - Fix compile() to pass checkpoint parameter to Functional API runner Add e2e tests: - test_continue_as_new_with_checkpoint: 3 tasks across 3 workflow runs - test_partial_execution_five_tasks: 5 tasks, 3 in first run, 2 in second
Add section 10 explaining how continue-as-new works with task result caching, including usage pattern and key differences from Graph API. Update file structure table with _functional_cache.py and _functional_future.py.
Document @entrypoint/@task usage with Temporal including: - Basic usage pattern with task and entrypoint decorators - Continue-as-new with task result caching - Task requirements (must be module-level functions) - Key differences from Graph API checkpointing
Split sample applications into Graph API and Functional API sections. Add continue_as_new sample for Functional API. Update links to point to graph_api/ and functional_api/ directories.
|
|
||
|
|
||
| @activity.defn(name="tool_node") | ||
| async def langgraph_tool_node(input_data: NodeActivityInput) -> NodeActivityOutput: |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Why not just use summary in that case?
| Returns: | ||
| Configured AsyncPregelLoop instance. | ||
| """ | ||
| with workflow.unsafe.imports_passed_through(): |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
I think there's a bunch of places pass throughs shouldn't be needed explicitly since we are marking langgraph as pass through.
|
|
||
| ### Positive | ||
|
|
||
| 1. **Sandbox passthrough is limited:** Only `pydantic_core`, `langchain_core`, `annotated_types` are passed through. |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Claude is actually lying here.
| | Worker | | ||
| | +----------------------------------------------+ | | ||
| | | Workflow Code | | | ||
| | | (LangGraph Orchestration) | | |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Not sure if you want to point out here that the workflow can host nodes as well, or if that's too detailed.
| graphs={"my_graph": build_my_graph}, | ||
| # Default options for all nodes across all graphs | ||
| default_activity_options=activity_options( | ||
| start_to_close_timeout=timedelta(minutes=10), |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
10 minutes is a long time to wait for a retry. I doubt it's a good default. And people will definitely copy/paste this unthinkingly.
| # Default options for all nodes across all graphs | ||
| default_activity_options=activity_options( | ||
| start_to_close_timeout=timedelta(minutes=10), | ||
| retry_policy=RetryPolicy(maximum_attempts=5), |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
default_node_activity_options to make this less abstract?
Would there also be default_llm_activity_options? For example, I understand LLM calls should have maximum attempts to minimize usage, whereas it's not clear to me that arbitrary nodes should limit retries--can/should those defaults be separately configurable?
| # Per-node options (applies to all graphs with matching node names) | ||
| per_node_activity_options={ | ||
| "llm_call": activity_options( | ||
| start_to_close_timeout=timedelta(minutes=30), |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
30 minutes is an anti-pattern IMO unless you're presenting a heartbeat. Such a node will not be drain-friendly on any production system, and if something goes wrong it will take forever to retry.
| def is_non_retryable_error(exc: BaseException) -> bool: | ||
| ``` | ||
| The error classifier correctly identifies: | ||
| - Non-retryable: `TypeError`, `ValueError`, `AuthenticationError`, 4xx HTTP errors |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Except for AuthenticationError, these are retryable in a temporal context if users want the simplicity of deploying to fix their stuck workflows.
That is, "preconditions not met" errors can be non-retryable but developer-facing errors can be.
| This integration combines [LangGraph](https://github.com/langchain-ai/langgraph) with [Temporal's durable execution](https://docs.temporal.io/evaluate/understanding-temporal#durable-execution). | ||
| It allows you to build durable agents that never lose their progress and handle long-running, asynchronous, and human-in-the-loop workflows with production-grade reliability. | ||
|
|
||
| Temporal and LangGraph are complementary technologies. |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
This could use a little more development because it's such a confusing situation that isn't clarified by the LangGraph docs, AFAICT.
To put this in terms familiar to LangGraph users, Temporal is operating as the Persistence Layer.
https://docs.langchain.com/oss/python/langgraph/durable-execution
They have docs on integrating with "Checkpointer Libraries."
https://docs.langchain.com/oss/python/langgraph/persistence#checkpointer-libraries
I'm guessing it won't do everything we want and we will need other hooks, but should we be creating a TemporalSaver (alongside InMemorySaver, PostgresSaver that does part of this so that we can play as well as possible with their existing extensibility points, for maintainability and ecosystem health?
Summary
This PR adds a LangGraph plugin (
temporalio.contrib.langgraph) that enables running LangGraph applications as durable Temporal workflows. It supports both the Graph API (StateGraph) and Functional API (@entrypoint/@task), with each node or task executing as a separate Temporal activity.Benefits
activity_options()Supported APIs
Graph API (StateGraph)
Functional API (@entrypoint/@task)
Key Features
Full LangGraph API Support:
interrupt()andCommand(resume=...)for human-in-the-loop workflowsNative Agent Support: Works with
create_agent/create_react_agentwithout special wrappersContinue-as-New: Checkpoint support for long-running workflows via
get_state()Unified Plugin: Single
LangGraphPluginauto-detects Graph API vs Functional APIArchitecture
_plugin.py_graph_registry.py_functional_registry.py_runner.py_functional_runner.py_activities.py_functional_activity.py_models.py_exceptions.py_store.pyTest Plan
Notes
langgraph>=0.3.31andlangchain-core>=0.3.31