fastplotlib
- pause_events(*graphics)[source]
Context manager for pausing Graphic events.
Examples
# pass in any number of graphics
with fpl.pause_events(graphic1, graphic2, graphic3):
    # enter context manager
    # all events are blocked from graphic1, graphic2, graphic3

# context manager exited, event states restored
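A runnable sketch of the effect; the sine-line data and the "colors" event handler below are illustrative assumptions, not part of this reference:

import numpy as np
import fastplotlib as fpl

figure = fpl.Figure()
xs = np.linspace(0, 2 * np.pi, 100)
line = figure[0, 0].add_line(np.column_stack([xs, np.sin(xs)]))

def on_colors(ev):
    # called whenever the line's colors change
    print("colors changed")

line.add_event_handler(on_colors, "colors")

with fpl.pause_events(line):
    line.colors = "magenta"  # on_colors does NOT fire here

line.colors = "yellow"  # event states restored, on_colors fires

figure.show()
fpl.loop.run()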
- enumerate_adapters()[source]
Get a list of adapter objects available on the current system.
An adapter can then be selected (e.g. using its summary), and a device then created from it.
The order of the adapters is such that Vulkan adapters go first, then Metal, then D3D12, then OpenGL. Within each category, the order as provided by the particular backend is maintained. Note that the same device may be present via multiple backends (e.g. vulkan/opengl).
We cannot make guarantees about whether the order of the adapters matches the order as reported by e.g. nvidia-smi. We have found that on a Linux multi-gpu cluster, the order does match, but we cannot promise that this is always the case. If you want to make sure, do some testing by allocating big buffers and checking memory usage using nvidia-smi.

See pygfx/wgpu-py#482 for more details.
- Return type: list[GPUAdapter]
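A minimal sketch of inspecting the available adapters with this function; each adapter's summary attribute (used in the select_adapter() example below) gives a one-line description:

import fastplotlib as fpl

# list all adapters wgpu can see on this system
for adapter in fpl.enumerate_adapters():
    # e.g. GPU name, device type, and backend
    print(adapter.summary)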
- select_adapter(adapter)[source]
Select a specific adapter / GPU.
Select an adapter as obtained via wgpu.gpu.enumerate_adapters_sync(), which can be useful in multi-gpu environments. For example:
adapters = wgpu.gpu.enumerate_adapters_sync()
adapters_tesla = [a for a in adapters if "Tesla" in a.summary]
adapters_discrete = [a for a in adapters if "DiscreteGPU" in a.summary]
pygfx.renderers.wgpu.select_adapter(adapters_discrete[0])
Note that using this function reduces the portability of your code, because it is highly specific to your current machine/environment.
The order of the adapters returned by wgpu.gpu.enumerate_adapters_sync() is such that Vulkan adapters go first, then Metal, then D3D12, then OpenGL. Within each category, the order as provided by the particular backend is maintained. Note that the same device may be present via multiple backends (e.g. vulkan/opengl).

We cannot make guarantees about whether the order of the adapters matches the order as reported by e.g. nvidia-smi. We have found that on a Linux multi-gpu cluster, the order does match, but we cannot promise that this is always the case. If you want to make sure, do some testing by allocating big buffers and checking memory usage using nvidia-smi.

Example to allocate and check GPU mem usage:
import subprocess

import wgpu
import torch


def allocate_gpu_mem_with_wgpu(idx):
    a = wgpu.gpu.enumerate_adapters_sync()[idx]
    d = a.request_device_sync()
    b = d.create_buffer(size=10*2**20, usage=wgpu.BufferUsage.COPY_DST)
    return b


def allocate_gpu_mem_with_torch(idx):
    d = torch.device(f"cuda:{idx}")
    return torch.ones([2000, 10], dtype=torch.float32, device=d)


def show_mem_usage():
    print(subprocess.run(["nvidia-smi"]))
See pygfx/wgpu-py#482 for more details.
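Since fastplotlib exposes both enumerate_adapters() and select_adapter() (see above), the same selection can be sketched at the fastplotlib level; the "DiscreteGPU" filter is carried over from the example above, and calling this before any Figure is created is an assumption:

import fastplotlib as fpl

adapters = fpl.enumerate_adapters()
discrete = [a for a in adapters if "DiscreteGPU" in a.summary]
if discrete:
    # select before creating a Figure so the device is created from this adapter
    fpl.select_adapter(discrete[0])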
- print_wgpu_report()[source]
Print a report on the internal status of WGPU. Can be useful in debugging and for providing details when making a bug report.
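A one-line usage sketch; the report is printed to stdout and can be pasted into a bug report:

import fastplotlib as fpl

# print a report on the internal status of WGPU
fpl.print_wgpu_report()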
fastplotlib.loop
See the rendercanvas docs: https://rendercanvas.readthedocs.io/stable/api.html#rendercanvas.BaseLoop