torch.unique(input, sorted=True, return_inverse=False, return_counts=False, dim=None) → tuple[Tensor, Tensor, Tensor]

    Returns the unique elements of the input tensor.

    This function is different from torch.unique_consecutive() in the sense that this function also eliminates non-consecutive duplicate values.

    Currently, both the CUDA and CPU implementations always sort the tensor at the beginning regardless of the sorted argument. Sorting can be slow, so if your input tensor is already sorted, it is recommended to use torch.unique_consecutive(), which avoids the sorting step.
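
    As a supplementary sketch (the input values below are made up for illustration and are not part of the original example), the contrast between the two functions looks like this; on an already-sorted tensor, torch.unique_consecutive() returns the same unique elements without the extra sort:

    >>> import torch
    >>> x = torch.tensor([1, 1, 2, 2, 3, 1])        # hypothetical, unsorted input
    >>> torch.unique(x)                              # sorts internally, removes all duplicates
    tensor([1, 2, 3])
    >>> torch.unique_consecutive(x)                  # only merges consecutive duplicates
    tensor([1, 2, 3, 1])
    >>> x_sorted = torch.tensor([1, 1, 2, 2, 3, 3])  # already-sorted input
    >>> torch.unique_consecutive(x_sorted)           # same result as torch.unique, no re-sorting
    tensor([1, 2, 3])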

    Parameters
  • input ( Tensor ) – the input tensor

  • sorted ( bool ) – Whether to sort the unique elements in ascending order before returning as output.

  • return_inverse ( bool ) – Whether to also return the indices for where elements in the original input ended up in the returned unique list.

  • return_counts ( bool ) – Whether to also return the counts for each unique element.

  • dim ( int , optional ) – the dimension to operate upon. If None , the unique of the flattened input is returned. Otherwise, each of the tensors indexed by the given dimension is treated as one of the elements to apply the unique operation upon. See examples for more details. Default: None

    Returns

    A tensor or a tuple of tensors containing

  • output ( Tensor ): the output list of unique scalar elements.

  • inverse_indices ( Tensor ): (optional) if return_inverse is True, there will be an additional returned tensor (same shape as input) representing the indices for where elements in the original input map to in the output; otherwise, this function will only return a single tensor.

  • counts ( Tensor ): (optional) if return_counts is True, there will be an additional returned tensor (same shape as output or output.size(dim), if dim was specified) representing the number of occurrences for each unique value or tensor.

    Example:

    >>> output = torch.unique(torch.tensor([1, 3, 2, 3], dtype=torch.long))
    >>> output
    tensor([1, 2, 3])
    >>> output, inverse_indices = torch.unique(
    ...     torch.tensor([1, 3, 2, 3], dtype=torch.long), sorted=True, return_inverse=True)
    >>> output
    tensor([1, 2, 3])
    >>> inverse_indices
    tensor([0, 2, 1, 2])
    >>> output, inverse_indices = torch.unique(
    ...     torch.tensor([[1, 3], [2, 3]], dtype=torch.long), sorted=True, return_inverse=True)
    >>> output
    tensor([1, 2, 3])
    >>> inverse_indices
    tensor([[0, 2],
            [1, 2]])
    >>> a = torch.tensor([
    ...     [
    ...         [1, 1, 0, 0],
    ...         [1, 1, 0, 0],
    ...         [0, 0, 1, 1],
    ...     ],
    ...     [
    ...         [0, 0, 1, 1],
    ...         [0, 0, 1, 1],
    ...         [1, 1, 1, 1],
    ...     ],
    ...     [
    ...         [1, 1, 0, 0],
    ...         [1, 1, 0, 0],
    ...         [0, 0, 1, 1],
    ...     ],
    ... ])
    >>> # If we call `torch.unique(a, dim=0)`, each of the tensors `a[idx, :, :]`
    >>> # will be compared. We can see that `a[0, :, :]` and `a[2, :, :]` match
    >>> # each other, so one of them will be removed.
    >>> (a[0, :, :] == a[2, :, :]).all()
    tensor(True)
    >>> a_unique_dim0 = torch.unique(a, dim=0)
    >>> a_unique_dim0
    tensor([[[0, 0, 1, 1],
             [0, 0, 1, 1],
             [1, 1, 1, 1]],
            [[1, 1, 0, 0],
             [1, 1, 0, 0],
             [0, 0, 1, 1]]])
    >>> # Notice which sub-tensors from `a` match with the sub-tensors from
    >>> # `a_unique_dim0`:
    >>> (a_unique_dim0[0, :, :] == a[1, :, :]).all()
    tensor(True)
    >>> (a_unique_dim0[1, :, :] == a[0, :, :]).all()
    tensor(True)
    >>> # For `torch.unique(a, dim=1)`, each of the tensors `a[:, idx, :]` are
    >>> # compared. `a[:, 0, :]` and `a[:, 1, :]` match each other, so one of
    >>> # them will be removed.
    >>> (a[:, 0, :] == a[:, 1, :]).all()
    tensor(True)
    >>> torch.unique(a, dim=1)
    tensor([[[0, 0, 1, 1],
             [1, 1, 0, 0]],
            [[1, 1, 1, 1],
             [0, 0, 1, 1]],
            [[0, 0, 1, 1],
             [1, 1, 0, 0]]])
    >>> # For `torch.unique(a, dim=2)`, the tensors `a[:, :, idx]` are compared.
    >>> # `a[:, :, 0]` and `a[:, :, 1]` match each other. Also, `a[:, :, 2]` and
    >>> # `a[:, :, 3]` match each other as well. So in this case, two of the
    >>> # sub-tensors will be removed.
    >>> (a[:, :, 0] == a[:, :, 1]).all()
    tensor(True)
    >>> (a[:, :, 2] == a[:, :, 3]).all()
    tensor(True)
    >>> torch.unique(a, dim=2)
    tensor([[[0, 1],
             [0, 1],
             [1, 0]],
            [[1, 0],
             [1, 0],
             [1, 1]],
            [[0, 1],
             [0, 1],
             [1, 0]]])