  • Audio

  • Complex Scale-Invariant Signal-to-Noise Ratio (C-SI-SNR)
  • Deep Noise Suppression Mean Opinion Score (DNSMOS)
  • Non-Intrusive Speech Quality Assessment (NISQA v2.0)
  • Perceptual Evaluation of Speech Quality (PESQ)
  • Permutation Invariant Training (PIT)
  • Scale-Invariant Signal-to-Distortion Ratio (SI-SDR)
  • Scale-Invariant Signal-to-Noise Ratio (SI-SNR)
  • Short-Time Objective Intelligibility (STOI)
  • Signal to Distortion Ratio (SDR)
  • Signal-to-Noise Ratio (SNR)
  • Source Aggregated Signal-to-Distortion Ratio (SA-SDR)
  • Speech-to-Reverberation Modulation Energy Ratio (SRMR)
  • Classification

  • Accuracy
  • AUROC
  • Average Precision
  • Calibration Error
  • Cohen Kappa
  • Confusion Matrix
  • Coverage Error
  • Equal Error Rate (EER)
  • Exact Match
  • F-1 Score
  • F-Beta Score
  • Group Fairness
  • Hamming Distance
  • Hinge Loss
  • Jaccard Index
  • Label Ranking Average Precision
  • Label Ranking Loss
  • Log AUC
  • Matthews Correlation Coefficient
  • Negative Predictive Value
  • Precision
  • Precision At Fixed Recall
  • Precision Recall Curve
  • Recall
  • Recall At Fixed Precision
  • Sensitivity At Specificity
  • Specificity
  • Specificity At Sensitivity
  • Stat Scores
  • Clustering

  • Adjusted Mutual Information Score
  • Adjusted Rand Score
  • Calinski Harabasz Score
  • Cluster Accuracy
  • Completeness Score
  • Davies Bouldin Score
  • Dunn Index
  • Fowlkes-Mallows Index
  • Homogeneity Score
  • Mutual Information Score
  • Normalized Mutual Information Score
  • Rand Score
  • V-Measure Score
  • Detection

  • Complete Intersection Over Union (cIoU)
  • Distance Intersection Over Union (dIoU)
  • Generalized Intersection Over Union (gIoU)
  • Intersection Over Union (IoU)
  • Mean-Average-Precision (mAP)
  • Modified Panoptic Quality
  • Panoptic Quality
  • Image

  • ARNIQA
  • Deep Image Structure And Texture Similarity (DISTS)
  • Error Relative Global Dim. Synthesis (ERGAS)
  • Frechet Inception Distance (FID)
  • Image Gradients
  • Inception Score
  • Kernel Inception Distance
  • Learned Perceptual Image Patch Similarity (LPIPS)
  • Memorization-Informed Frechet Inception Distance (MiFID)
  • Multi-Scale SSIM
  • Peak Signal-to-Noise Ratio (PSNR)
  • Peak Signal To Noise Ratio With Blocked Effect
  • Perceptual Path Length (PPL)
  • Quality with No Reference
  • Relative Average Spectral Error (RASE)
  • Root Mean Squared Error Using Sliding Window
  • Spatial Correlation Coefficient (SCC)
  • Spatial Distortion Index
  • Spectral Angle Mapper
  • Spectral Distortion Index
  • Structural Similarity Index Measure (SSIM)
  • Total Variation (TV)
  • Universal Image Quality Index
  • Visual Information Fidelity (VIF)
  • Multimodal

  • CLIP Image Quality Assessment (CLIP-IQA)
  • CLIP Score
  • Lip Vertex Error
  • Nominal

  • Cramer’s V
  • Fleiss Kappa
  • Pearson’s Contingency Coefficient
  • Theil’s U
  • Tschuprow’s T
  • Pairwise

  • Cosine Similarity
  • Euclidean Distance
  • Linear Similarity
  • Manhattan Distance
  • Minkowski Distance
  • Regression

  • Concordance Corr. Coef.
  • Cosine Similarity
  • Critical Success Index (CSI)
  • Continuous Ranked Probability Score (CRPS)
  • Explained Variance
  • Jensen-Shannon Divergence
  • Kendall Rank Corr. Coef.
  • KL Divergence
  • Log Cosh Error
  • Mean Absolute Error (MAE)
  • Mean Absolute Percentage Error (MAPE)
  • Mean Squared Error (MSE)
  • Mean Squared Log Error (MSLE)
  • Minkowski Distance
  • Normalized Root Mean Squared Error (NRMSE)
  • Pearson Corr. Coef.
  • R2 Score
  • Relative Squared Error (RSE)
  • Spearman Corr. Coef.
  • Symmetric Mean Absolute Percentage Error (SMAPE)
  • Tweedie Deviance Score
  • Weighted MAPE
  • Retrieval

  • Retrieval AUROC
  • Retrieval Fall-Out
  • Retrieval Hit Rate
  • Retrieval Mean Average Precision (MAP)
  • Retrieval Mean Reciprocal Rank (MRR)
  • Retrieval Normalized DCG
  • Retrieval Precision
  • Retrieval Precision Recall Curve
  • Retrieval R-Precision
  • Retrieval Recall
  • Segmentation

  • Dice Score
  • Generalized Dice Score
  • Hausdorff Distance
  • Mean Intersection over Union (mIoU)
  • Shape

  • Procrustes Disparity
  • Text

  • BERT Score
  • BLEU Score
  • Char Error Rate
  • ChrF Score
  • Edit Distance
  • Extended Edit Distance
  • InfoLM
  • Match Error Rate
  • Perplexity
  • ROUGE Score
  • Sacre BLEU Score
  • SQuAD
  • Translation Edit Rate (TER)
  • Word Error Rate
  • Word Info. Lost
  • Word Info. Preserved
  • Video

  • Video Multi-Method Assessment Fusion (VMAF)
  • Wrappers

  • Bootstrapper
  • Classwise Wrapper
  • Feature Sharing
  • Metric Tracker
  • Min / Max
  • Multi-output Wrapper
  • Multi-task Wrapper
  • Running
  • Transformations
  • API Reference

  • torchmetrics.Metric
  • torchmetrics.utilities
  • Community

  • TorchMetrics Governance
  • Contributor Covenant Code of Conduct
  • Contributing
  • Changelog

class torchmetrics.RelativeSquaredError(num_outputs=1, squared=True, **kwargs)[source]

    Computes the relative squared error (RSE).

    \[\text{RSE} = \frac{\sum_i^N(y_i - \hat{y_i})^2}{\sum_i^N(y_i - \overline{y})^2}\]

    Where \(y\) is a tensor of target values with mean \(\overline{y}\), and \(\hat{y}\) is a tensor of predictions.

    If num_outputs > 1, the returned value is averaged over all the outputs.
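
    The formula can be cross-checked by hand in plain PyTorch. Below is a minimal sketch on the same toy data used in the Example further down; the names num and denom are illustrative only and not part of the TorchMetrics API.

    >>> import torch
    >>> target = torch.tensor([3.0, -0.5, 2.0, 7.0])
    >>> preds = torch.tensor([2.5, 0.0, 2.0, 8.0])
    >>> # numerator: sum of squared prediction errors
    >>> num = torch.sum((target - preds) ** 2)
    >>> # denominator: sum of squared deviations of the targets from their mean
    >>> denom = torch.sum((target - target.mean()) ** 2)
    >>> num / denom
    tensor(0.0514)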

    As input to forward and update the metric accepts the following input:

  • preds (Tensor): Predictions from model in float tensor with shape (N,) or (N, M) (multioutput)

  • target (Tensor): Ground truth values in float tensor with shape (N,) or (N, M) (multioutput)

    As output of forward and compute the metric returns the following output:

  • rse (Tensor): A tensor with the RSE score(s)

    Parameters:
  • num_outputs (int) – Number of outputs in multioutput setting (see the sketch after the Example below)

  • squared (bool) – If True, returns the RSE value; if False, returns the RRSE (root relative squared error) value.

  • kwargs (Any) – Additional keyword arguments, see Advanced metric settings for more info.

    Example

    >>> import torch
    >>> from torchmetrics.regression import RelativeSquaredError
    >>> target = torch.tensor([3, -0.5, 2, 7])
    >>> preds = torch.tensor([2.5, 0.0, 2, 8])
    >>> relative_squared_error = RelativeSquaredError()
    >>> relative_squared_error(preds, target)
    tensor(0.0514)
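
    For the multioutput case, the sketch below illustrates the documented behaviour (one RSE per column, averaged into a single value), assuming only the signature shown above; the inputs are random, so no output values are reproduced.

    >>> import torch
    >>> from torchmetrics.regression import RelativeSquaredError
    >>> target = torch.randn(16, 2)
    >>> preds = torch.randn(16, 2)
    >>> # num_outputs=2: an RSE is computed per column and averaged into one scalar
    >>> rse = RelativeSquaredError(num_outputs=2)
    >>> score = rse(preds, target)
    >>> # squared=False returns the root relative squared error (RRSE) instead
    >>> rrse_metric = RelativeSquaredError(num_outputs=2, squared=False)
    >>> rrse_score = rrse_metric(preds, target)
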
    plot(val=None, ax=None)[source]
    

    Plot a single or multiple values from the metric.

    Parameters:
  • val (Union[Tensor, Sequence[Tensor], None]) – Either a single result from calling metric.forward or metric.compute or a list of these results. If no value is provided, will automatically call metric.compute and plot that result.

  • ax (Optional[Axes]) – A matplotlib axis object. If provided, the plot will be added to that axis.

  • Return type:

    tuple[Figure, Union[Axes, ndarray]]

    Returns:

    Figure and Axes object

    Raises:

    ModuleNotFoundError – If matplotlib is not installed

    >>> from torch import randn
    >>> # Example plotting a single value
    >>> from torchmetrics.regression import RelativeSquaredError
    >>> metric = RelativeSquaredError()
    >>> metric.update(randn(10,), randn(10,))
    >>> fig_, ax_ = metric.plot()
    
    >>> from torch import randn
    >>> # Example plotting multiple values
    >>> from torchmetrics.regression import RelativeSquaredError
    >>> metric = RelativeSquaredError()
    >>> values = []
    >>> for _ in range(10):
    ...     values.append(metric(randn(10,), randn(10,)))
    >>> fig, ax = metric.plot(values)

torchmetrics.functional.relative_squared_error(preds, target, squared=True)[source]
    

    Computes the relative squared error (RSE).

    \[\text{RSE} = \frac{\sum_i^N(y_i - \hat{y_i})^2}{\sum_i^N(y_i - \overline{y})^2}\]

    Where \(y\) is a tensor of target values with mean \(\overline{y}\), and \(\hat{y}\) is a tensor of predictions.

    If preds and target are 2D tensors, the RSE is averaged over the second dim (see the sketch after the Example below).

    Parameters:
  • preds (Tensor) – estimated labels

  • target (Tensor) – ground truth labels

  • squared (bool) – if set to False, returns the RRSE (root relative squared error) instead of the RSE

  • Return type:

    Tensor

    Returns:

    Tensor with RSE

    Example

    >>> import torch
    >>> from torchmetrics.functional.regression import relative_squared_error
    >>> target = torch.tensor([3, -0.5, 2, 7])
    >>> preds = torch.tensor([2.5, 0.0, 2, 8])
    >>> relative_squared_error(preds, target)
    tensor(0.0514)
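
    As a sketch of the 2D behaviour described above (the values are arbitrary, so outputs are not shown):

    >>> import torch
    >>> from torchmetrics.functional.regression import relative_squared_error
    >>> target = torch.randn(8, 3)
    >>> preds = torch.randn(8, 3)
    >>> # 2D inputs: one RSE per column, averaged over the second dim
    >>> score = relative_squared_error(preds, target)
    >>> # squared=False returns the root relative squared error (RRSE)
    >>> rrse = relative_squared_error(preds, target, squared=False)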