CUDAGraph
class torch.cuda.CUDAGraph [source]
Wrapper around a CUDA graph.
Warning
This API is in beta and may change in future releases.
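Example (a minimal sketch of typical usage; the tensor shapes and the elementwise workload are illustrative assumptions). Most code does not call the capture methods directly but captures through the torch.cuda.graph context manager and then replays:

    import torch

    static_input = torch.randn(8, 16, device="cuda")
    static_output = torch.zeros(8, 16, device="cuda")

    # Warm up on a side stream before capture, as recommended for graph capture.
    # The workload here is a placeholder.
    s = torch.cuda.Stream()
    s.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(s):
        for _ in range(3):
            static_output.copy_(torch.relu(static_input) * 2)
    torch.cuda.current_stream().wait_stream(s)

    # Capture: torch.cuda.graph drives capture_begin/capture_end internally.
    g = torch.cuda.CUDAGraph()
    with torch.cuda.graph(g):
        static_output.copy_(torch.relu(static_input) * 2)

    # Replay: refill the static input in place, then rerun the captured work.
    static_input.copy_(torch.randn(8, 16, device="cuda"))
    g.replay()
    torch.cuda.synchronize()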
capture_begin(pool=None) [source]
Begins capturing CUDA work on the current stream.

Typically, you shouldn't call capture_begin yourself. Use graph or make_graphed_callables(), which call capture_begin internally.

Parameters:
    pool (optional) – Token (returned by graph_pool_handle() or other_Graph_instance.pool()) that hints this graph may share memory with the indicated pool. See Graph memory management.
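Example (a hedged sketch of the low-level capture sequence, shown only to illustrate capture_begin/capture_end; in practice prefer the graph context manager, and note the simple workload and warmup below are assumptions):

    import torch

    x = torch.randn(1024, device="cuda")
    y = torch.empty_like(x)

    # Warm up once so lazily initialized state is not created during capture.
    y.copy_(x * 2)
    torch.cuda.synchronize()

    g = torch.cuda.CUDAGraph()

    # Capture must happen on a non-default stream.
    s = torch.cuda.Stream()
    s.wait_stream(torch.cuda.current_stream())
    with torch.cuda.stream(s):
        g.capture_begin()   # pool=... may be passed here to hint pool sharing
        y.copy_(x * 2)
        g.capture_end()
    torch.cuda.current_stream().wait_stream(s)

    g.replay()
    torch.cuda.synchronize()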
capture_end() [source]
Ends CUDA graph capture on the current stream. After capture_end, replay may be called on this instance.

Typically, you shouldn't call capture_end yourself. Use graph or make_graphed_callables(), which call capture_end internally.
pool() [source]
Returns an opaque token representing the id of this graph's memory pool. This id can optionally be passed to another graph's capture_begin, which hints the other graph may share the same memory pool.
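Example (a hedged sketch of sharing a memory pool between two graphs; the workloads are illustrative, and the torch.cuda.graph context manager forwards the pool token to the second graph's capture_begin):

    import torch

    x = torch.randn(4096, device="cuda")
    y = torch.empty_like(x)
    z = torch.empty_like(x)

    # Eager warmup run before capture (placeholder workload).
    y.copy_(torch.sin(x))
    z.copy_(torch.cos(y))
    torch.cuda.synchronize()

    g1 = torch.cuda.CUDAGraph()
    with torch.cuda.graph(g1):
        y.copy_(torch.sin(x))

    # Hint that g2 may reuse g1's memory pool by passing g1.pool().
    g2 = torch.cuda.CUDAGraph()
    with torch.cuda.graph(g2, pool=g1.pool()):
        z.copy_(torch.cos(y))

    g1.replay()
    g2.replay()
    torch.cuda.synchronize()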
replay() [source]
Replays the CUDA work captured by this graph.
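Example (a hedged sketch of the replay pattern: each replay reruns the captured kernels on whatever data currently sits in the graph's static tensors; names and the workload are assumptions):

    import torch

    static_input = torch.zeros(1024, device="cuda")
    static_output = torch.zeros(1024, device="cuda")

    # Warm up, then capture a simple elementwise computation.
    static_output.copy_(static_input * 3 + 1)
    torch.cuda.synchronize()

    g = torch.cuda.CUDAGraph()
    with torch.cuda.graph(g):
        static_output.copy_(static_input * 3 + 1)

    for step in range(5):
        static_input.fill_(float(step))   # refill inputs in place
        g.replay()                        # rerun the captured kernels
        torch.cuda.synchronize()
        print(step, static_output[0].item())  # prints 3 * step + 1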
reset() [source]
Deletes the graph currently held by this instance.