
Finally Making Sense of Loss Functions in Machine Learning!!!

Artificial Intelligence · Machine Learning

Huber Loss sits between MSE and MAE: when the error is small it behaves like MSE, and when the error is large it behaves like MAE.

1. Mean Squared Error (MSE)

MSE is one of the most commonly used loss functions for regression tasks.

It measures the average squared error between the model's predictions and the actual values.

Formula:

MSE = \frac{1}{n} \sum_{i=1}^{n} (y_i - \hat{y}_i)^2

Characteristics:

  • Large errors are penalized more heavily, because each error is squared.
  • Sensitive to outliers.
import tensorflow as tf
import matplotlib.pyplot as plt

class MeanSquaredError_Loss:
    """
    This class provides two methods to calculate Mean Squared Error Loss.
    """
    def __init__(self):
        pass

    @staticmethod
    def mean_squared_error_manual(y_true, y_pred):
        # Average of the squared differences, computed by hand
        squared_difference = tf.square(y_true - y_pred)
        loss = tf.reduce_mean(squared_difference)
        return loss

    @staticmethod
    def mean_squared_error_tf(y_true, y_pred):
        # Same quantity via the built-in Keras loss
        mse = tf.keras.losses.MeanSquaredError()
        loss = mse(y_true, y_pred)
        return loss

if __name__ == "__main__":
    def mean_squared_error_test(N=10, C=10):
    
        # Generate random data
        y_true = tf.random.uniform(shape=(N, ), minval=-C, maxval=C, dtype=tf.float32)
        y_pred = tf.random.uniform(shape=(N, ), minval=-C, maxval=C, dtype=tf.float32)


        # Test the MeanSquaredError_Loss class
        mse_manual = MeanSquaredError_Loss.mean_squared_error_manual(y_true, y_pred)
        print(f"mean_squared_error_manual: {mse_manual}")

        mse_tf = MeanSquaredError_Loss.mean_squared_error_tf(y_true, y_pred)
        print(f"mean_squared_error_tensorflow: {mse_tf}")
        print()

        # Plot the points on a graph
        plt.figure(figsize=(8, 6))
        plt.scatter(y_true.numpy(), y_pred.numpy(), color='blue', label='Predicted vs Actual')
        plt.plot([-C, C], [-C, C], 'r--', label='Ideal Line')  # Diagonal line representing ideal predictions

        plt.title(f"Predictions vs Actuals\nMean Squared Error: {mse_manual.numpy():.4f}")
        plt.xlabel('Actual Values')
        plt.ylabel('Predicted Values')
        plt.legend()
        plt.grid(True)
        plt.show()

    mean_squared_error_test()
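To see the sensitivity to outliers concretely, here is a minimal sketch (reusing the MeanSquaredError_Loss class from the script above; the numbers are purely illustrative) in which a single bad prediction dominates the average loss:

y_true = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0])
y_pred_clean = tf.constant([1.1, 2.1, 2.9, 4.2, 5.0])     # small errors everywhere
y_pred_outlier = tf.constant([1.1, 2.1, 2.9, 4.2, 25.0])  # one large error

# The outlier alone contributes (25 - 5)^2 = 400 to the sum of squares,
# swamping the four nearly correct predictions.
print(MeanSquaredError_Loss.mean_squared_error_manual(y_true, y_pred_clean))    # ≈ 0.014
print(MeanSquaredError_Loss.mean_squared_error_manual(y_true, y_pred_outlier))  # ≈ 80.01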


2. Mean Absolute Error (MAE)

MAE is another loss function for regression tasks; it computes the average of the absolute differences between predictions and actual values.

Formula:

MAE = \frac{1}{n} \sum_{i=1}^{n} |y_i - \hat{y}_i|

Characteristics:

  • MAE is less sensitive to outliers than MSE, because the errors are not squared.
  • More intuitive: it directly reflects the average magnitude of the error.
import tensorflow as tf
import matplotlib.pyplot as plt

class MeanAbsoluteError_Loss:
    """
    This class provides two methods to calculate Mean Absolute Error Loss.
    """
    def __init__(self):
        pass

    @staticmethod
    def mean_absolute_error_manual(y_true, y_pred):
        # Average of the absolute differences, computed by hand
        absolute_difference = tf.math.abs(y_true - y_pred)
        loss = tf.reduce_mean(absolute_difference)
        return loss

    @staticmethod
    def mean_absolute_error_tf(y_true, y_pred):
        # Same quantity via the built-in Keras loss
        mae = tf.keras.losses.MeanAbsoluteError()
        loss = mae(y_true, y_pred)
        return loss

if __name__ == "__main__":
    def mean_absolute_error_test(N=10, C=10):
        # Generate random data
        y_true = tf.random.uniform(shape=(N, ), minval=-C, maxval=C, dtype=tf.float32)
        y_pred = tf.random.uniform(shape=(N, ), minval=-C, maxval=C, dtype=tf.float32)


        # Test the MeanAbsoluteError_Loss class
        mae_manual = MeanAbsoluteError_Loss.mean_absolute_error_manual(y_true, y_pred)
        print(f"mean_absolute_error_manual: {mae_manual}")

        mae_tf = MeanAbsoluteError_Loss.mean_absolute_error_tf(y_true, y_pred)
        print(f"mean_absolute_error_tensorflow: {mae_tf}")
        print()

        # Plot the points on a graph
        plt.figure(figsize=(8, 6))
        plt.scatter(y_true.numpy(), y_pred.numpy(), color='blue', label='Predicted vs Actual')
        plt.plot([-C, C], [-C, C], 'r--', label='Ideal Line')  # Diagonal line representing ideal predictions

        plt.title(f"Predictions vs Actuals\nMean Absolute Error: {mae_manual.numpy():.4f}")
        plt.xlabel('Actual Values')
        plt.ylabel('Predicted Values')
        plt.legend()
        plt.grid(True)
        plt.show()

    mean_absolute_error_test()
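One practical consequence worth noting: the gradient of MSE scales with the size of the error, while the gradient of MAE has constant magnitude regardless of how wrong a prediction is. The following minimal sketch (values are illustrative) makes this visible with tf.GradientTape:

import tensorflow as tf

y_true = tf.constant([0.0, 0.0])
y_pred = tf.Variable([0.5, 5.0])  # one small error, one large error

with tf.GradientTape(persistent=True) as tape:
    mse = tf.reduce_mean(tf.square(y_true - y_pred))
    mae = tf.reduce_mean(tf.abs(y_true - y_pred))

# d(MSE)/d(y_pred_i) = 2 * (y_pred_i - y_true_i) / n -> grows with the error
print(tape.gradient(mse, y_pred).numpy())  # [0.5, 5.0]
# d(MAE)/d(y_pred_i) = sign(y_pred_i - y_true_i) / n -> constant magnitude
print(tape.gradient(mae, y_pred).numpy())  # [0.5, 0.5]
del tape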


3. Huber Loss

Huber Loss is a compromise between MSE and MAE: when the error is small it behaves like MSE, and when the error is large it behaves like MAE.

This makes it more stable in the presence of outliers.

Formula:

L_\delta(y, \hat{y}) =
\begin{cases}
\frac{1}{2}(y - \hat{y})^2 & \text{if } |y - \hat{y}| \le \delta \\
\delta \left( |y - \hat{y}| - \frac{1}{2}\delta \right) & \text{otherwise}
\end{cases}

Characteristics:

  • More robust to outliers, while retaining sensitivity to small errors.
import tensorflow as tf
import matplotlib.pyplot as plt

class Huber_Loss:
    """
    This class provides two methods to calculate Huber Loss.
    """
    def __init__(self, delta = 1.0):
        # delta is the threshold at which the loss switches from quadratic to linear
        self.delta = delta

    def huber_loss_manual(self, y_true, y_pred):
        # Quadratic for |error| <= delta, linear beyond it
        error = tf.math.abs(y_true - y_pred)
        is_small_error = tf.math.less_equal(error, self.delta)
        small_error_loss = tf.math.square(error) / 2
        large_error_loss = self.delta * (error - (0.5 * self.delta))
        loss = tf.where(is_small_error, small_error_loss, large_error_loss)
        loss = tf.reduce_mean(loss)
        return loss

    def huber_loss_tf(self, y_true, y_pred):
        # Same quantity via the built-in Keras loss
        huber_loss = tf.keras.losses.Huber(delta = self.delta)(y_true, y_pred)
        return huber_loss

if __name__ == "__main__":
    def huber_loss_test(N=10, C=10):
        # Generate random data
        y_true = tf.random.uniform(shape=(N, ), minval=-C, maxval=C, dtype=tf.float32)
        y_pred = tf.random.uniform(shape=(N, ), minval=-C, maxval=C, dtype=tf.float32)


        # Test the Huber_Loss class
        huber = Huber_Loss() 
        hl_manual = huber.huber_loss_manual(y_true, y_pred)
        print(f"huber_loss_manual: {hl_manual}")

        hl_tf = huber.huber_loss_tf(y_true, y_pred)
        print(f"huber_loss_tensorflow: {hl_tf}")
        print()

        # Plot the points on a graph
        plt.figure(figsize=(8, 6))
        plt.scatter(y_true.numpy(), y_pred.numpy(), color='blue', label='Predicted vs Actual')
        plt.plot([-C, C], [-C, C], 'r--', label='Ideal Line')  # Diagonal line representing ideal predictions

        plt.title(f"Predictions vs Actuals\nHuber Loss: {hl_manual.numpy():.4f}")
        plt.xlabel('Actual Values')
        plt.ylabel('Predicted Values')
        plt.legend()
        plt.grid(True)
        plt.show()

    huber_loss_test()
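As a quick sanity check on the two branches, here is a minimal sketch (reusing the Huber_Loss class above, with illustrative numbers): below delta the loss equals the MSE-style quadratic term, and above delta it grows only linearly:

huber = Huber_Loss(delta=1.0)

# Small error (0.5 <= delta): quadratic branch, loss = 0.5^2 / 2 = 0.125
print(huber.huber_loss_manual(tf.constant([0.0]), tf.constant([0.5])).numpy())

# Large error (4.0 > delta): linear branch, loss = 1.0 * (4.0 - 0.5) = 3.5
print(huber.huber_loss_manual(tf.constant([0.0]), tf.constant([4.0])).numpy())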


4. Cross-Entropy Loss

Cross-Entropy Loss is widely used in classification tasks, especially binary and multi-class problems.

It measures the difference between the probability distribution output by the model and the actual class distribution.

Formula (multi-class):

L = -\sum_{i} y_i \log(\hat{y}_i)

For binary classification:

L = -\frac{1}{n} \sum_{i=1}^{n} \left[ y_i \log(\hat{y}_i) + (1 - y_i) \log(1 - \hat{y}_i) \right]

Characteristics:

  • The loss is low when the predicted probabilities match the actual labels, and high otherwise.
  • Particularly effective for optimizing classification problems.
import tensorflow as tf
import matplotlib.pyplot as plt

class Cross_Entropy_Loss:
    """
    This class provides two methods to calculate Cross-Entropy Loss.
    """
    def __init__(self):
        pass

    def cross_entropy_loss_manual(self, y_true, y_pred):
        # Normalize predictions into a probability distribution, then clip to avoid log(0)
        y_pred /= tf.reduce_sum(y_pred)
        epsilon = tf.keras.backend.epsilon()
        y_pred_new = tf.clip_by_value(y_pred, epsilon, 1.)
        loss = -tf.reduce_sum(y_true * tf.math.log(y_pred_new))
        return loss

    def cross_entropy_loss_tf(self, y_true, y_pred):
        # Same quantity via the built-in Keras loss
        loss = tf.keras.losses.categorical_crossentropy(y_true, y_pred)
        return loss

if __name__ == "__main__":
    def cross_entropy_loss_test(N=10, C=1):
        # Generate random non-negative data and normalize both tensors into
        # valid probability distributions (negative values would make log() undefined)
        y_true = tf.random.uniform(shape=(N, ), minval=0, maxval=C, dtype=tf.float32)
        y_pred = tf.random.uniform(shape=(N, ), minval=0, maxval=C, dtype=tf.float32)
        y_true /= tf.reduce_sum(y_true)
        y_pred /= tf.reduce_sum(y_pred)


        # Test the Cross-Entropy_Loss class
        cross_entropy = Cross_Entropy_Loss() 
        ce_manual = cross_entropy.cross_entropy_loss_manual(y_true, y_pred)
        print(f"cross_entropy_loss_manual: {ce_manual}")

        ce_tf = cross_entropy.cross_entropy_loss_tf(y_true, y_pred)
        print(f"cross_entropy_loss_tensorflow: {ce_tf}")
        print()

        # Plot the points on a graph
        plt.figure(figsize=(8, 6))
        plt.scatter(y_true.numpy(), y_pred.numpy(), color='blue', label='Predicted vs Actual')
        plt.plot([0, C], [0, C], 'r--', label='Ideal Line')  # Diagonal line representing ideal predictions

        plt.title(f"Predictions vs Actuals\nCross-Entropy Loss: {ce_manual.numpy():.4f}")
        plt.xlabel('Actual Values')
        plt.ylabel('Predicted Values')
        plt.legend()
        plt.grid(True)
        plt.show()

    cross_entropy_loss_test()
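The code above covers the categorical case. For the binary formula, a minimal sketch along the same lines (illustrative labels and probabilities, checked against Keras' built-in BinaryCrossentropy) could look like this:

import tensorflow as tf

y_true = tf.constant([1.0, 0.0, 1.0, 0.0])
y_pred = tf.constant([0.9, 0.2, 0.7, 0.4])  # predicted probabilities of the positive class

# Manual binary cross-entropy, clipped to avoid log(0)
epsilon = tf.keras.backend.epsilon()
p = tf.clip_by_value(y_pred, epsilon, 1. - epsilon)
bce_manual = -tf.reduce_mean(y_true * tf.math.log(p) + (1 - y_true) * tf.math.log(1 - p))
print(f"binary_cross_entropy_manual: {bce_manual}")

bce_tf = tf.keras.losses.BinaryCrossentropy()(y_true, y_pred)
print(f"binary_cross_entropy_tensorflow: {bce_tf}")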

5. Hinge Loss

Hinge Loss is commonly used in support vector machines (SVMs).

It encourages the model to score the correct class higher than the incorrect classes by at least a margin (usually 1).

Formula (binary, with labels y ∈ {-1, +1}):

L = \max(0,\; 1 - y \cdot \hat{y})

The multi-class (categorical) variant implemented in the code below, where c is the correct class:

L = \max(0,\; 1 + \max_{j \ne c} \hat{y}_j - \hat{y}_c)

Characteristics:

  • Forces the model to create a "margin" around the correct class, making the classification more robust.
  • Well suited to optimizing linear classifiers.
import tensorflow as tf
import matplotlib.pyplot as plt

class Hinge_Loss:
    """
    This class provides two methods to calculate Hinge Loss.
    """
    def __init__(self):
        pass

    def hinge_loss_manual(self, y_true, y_pred):
        # pos: score of the correct class (y_true is expected to be one-hot)
        # neg: highest score among the incorrect classes
        pos = tf.reduce_sum(y_true * y_pred, axis=-1)
        neg = tf.reduce_max((1 - y_true) * y_pred, axis=-1)
        loss = tf.maximum(0., neg - pos + 1.)
        return loss

    def hinge_loss_tf(self, y_true, y_pred):
        # Same quantity via the built-in Keras loss
        loss = tf.keras.losses.CategoricalHinge()(y_true, y_pred)
        return loss

if __name__ == "__main__":
    def hinge_loss_test(N=10, C=10):
        # Generate random float scores as a smoke test.
        # Note: categorical hinge normally expects y_true to be one-hot;
        # random values are used here only to exercise both implementations.
        y_true = tf.random.uniform(shape=(N, ), minval=-C, maxval=C, dtype=tf.float32)
        y_pred = tf.random.uniform(shape=(N, ), minval=-C, maxval=C, dtype=tf.float32)

        # Test the Hinge_Loss class
        hinge = Hinge_Loss()
        hl_manual = hinge.hinge_loss_manual(y_true, y_pred)
        print(f"hinge_loss_manual: {hl_manual}")

        hl_tf = hinge.hinge_loss_tf(y_true, y_pred)
        print(f"hinge_loss_tensorflow: {hl_tf}")
        print()

        # Plot the points on a graph
        plt.figure(figsize=(8, 6))
        plt.scatter(y_true.numpy(), y_pred.numpy(), color='blue', label='Predicted vs Actual')
        plt.plot([-C, C], [-C, C], 'r--', label='Ideal Line')  # Diagonal line representing ideal predictions

        plt.title(f"Predictions vs Actuals\nHinge Loss: {hl_manual.numpy()}")
        plt.xlabel('Actual Values')
        plt.ylabel('Predicted Values')
        plt.legend()
        plt.grid(True)
        plt.show()

    hinge_loss_test()
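For the binary case described by the first formula, a minimal sketch (illustrative labels in {-1, +1} and raw scores, checked against Keras' built-in Hinge loss) could look like this:

import tensorflow as tf

y_true = tf.constant([1.0, -1.0, 1.0, -1.0])
y_pred = tf.constant([0.8, -0.5, -0.3, 2.0])  # raw classifier scores

# Manual binary hinge: max(0, 1 - y * y_hat), averaged over samples
hinge_manual = tf.reduce_mean(tf.maximum(0., 1. - y_true * y_pred))
print(f"hinge_loss_manual: {hinge_manual}")

# Keras' Hinge loss computes the same quantity for labels in {-1, +1}
hinge_tf = tf.keras.losses.Hinge()(y_true, y_pred)
print(f"hinge_loss_tensorflow: {hinge_tf}")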

6. Intersection Over Union (IoU)

IoU is commonly used in object detection tasks to measure the overlap between a predicted bounding box and the ground-truth bounding box.

Formula:

IoU = \frac{|A \cap B|}{|A \cup B|}

Characteristics:

  • Ranges from 0 to 1, where 1 means perfect overlap and 0 means no overlap.
  • Used to evaluate the accuracy of bounding-box predictions.
import tensorflow as tf
import matplotlib.pyplot as plt

class IOU:
    """
    This class provides two methods to calculate Intersection over Union (IoU).
    """
    def __init__(self):
        pass

    def IOU_manual(self, y_true, y_pred):
        # Count cells where both masks are 1 (intersection) and where either is 1 (union)
        intersection = tf.reduce_sum(tf.cast(tf.logical_and(tf.equal(y_true, 1), tf.equal(y_pred, 1)), dtype=tf.float32))
        union = tf.reduce_sum(tf.cast(tf.logical_or(tf.equal(y_true, 1), tf.equal(y_pred, 1)), dtype=tf.float32))
        iou = intersection / union  # assumes the union is non-empty
        return iou

    def IOU_tf(self, y_true, y_pred):
        iou_metric = tf.keras.metrics.IoU(num_classes=2, target_class_ids=[1])
        iou_metric.update_state(y_true, y_pred)
        iou = iou_metric.result()
        return iou

if __name__ == "__main__":
    def IOU_test():
        y_true = tf.constant([[0, 1, 1, 0], 
                              [0, 1, 1, 0], 
                              [0, 0, 0, 0], 
                              [0, 0, 0, 0]], dtype=tf.float32)  # Example binary mask (ground truth)

        y_pred = tf.constant([[0, 1, 1, 0], 
                              [1, 1, 0, 0], 
                              [0, 0, 0, 0], 
                              [0, 0, 0, 0]], dtype=tf.float32)  # Example binary mask (prediction)

        iou = IOU()

        iou_manual = iou.IOU_manual(y_true, y_pred)
        print(f"IOU_manual: {iou_manual}")

        iou_tf = iou.IOU_tf(y_true, y_pred)
        print(f"IOU_tensorflow: {iou_tf}")

    IOU_test()
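The mask-based version above is what segmentation uses; for the detection use case mentioned at the start of this section, a minimal bounding-box IoU sketch (boxes given as illustrative [x1, y1, x2, y2] corner coordinates) could look like this:

import tensorflow as tf

def box_iou(box_a, box_b):
    # Boxes are [x1, y1, x2, y2] with (x1, y1) top-left and (x2, y2) bottom-right
    x1 = tf.maximum(box_a[0], box_b[0])
    y1 = tf.maximum(box_a[1], box_b[1])
    x2 = tf.minimum(box_a[2], box_b[2])
    y2 = tf.minimum(box_a[3], box_b[3])

    # Intersection is zero when the boxes do not overlap
    intersection = tf.maximum(0., x2 - x1) * tf.maximum(0., y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - intersection
    return intersection / union

box_a = tf.constant([0., 0., 4., 4.])
box_b = tf.constant([2., 2., 6., 6.])
print(box_iou(box_a, box_b).numpy())  # intersection 4, union 28 -> ≈ 0.1429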

7. Kullback-Leibler (KL) Divergence

KL divergence is an asymmetric measure of the difference between two probability distributions, commonly used in generative models and variational autoencoders.

Formula:

D_{KL}(P \| Q) = \sum_{x} P(x) \log \frac{P(x)}{Q(x)}

Characteristics:

  • The KL divergence is 0 when P and Q are identical.
  • Suitable for evaluating the difference between the model's predicted distribution and a target distribution.
import tensorflow as tf
import matplotlib.pyplot as plt

class Kullback_Leibler:
    """
    This class provides two methods to calculate Kullback-Leibler Loss.
    """
    def __init__(self):
        pass

    def kullback_leibler_manual(self, y_true, y_pred):
        # Clip both distributions to avoid log(0) and division by zero
        epsilon = tf.keras.backend.epsilon()
        y_true = tf.clip_by_value(y_true, epsilon, 1)
        y_pred = tf.clip_by_value(y_pred, epsilon, 1)

        # Element-wise P(x) * log(P(x) / Q(x)), summed over the distribution
        loss = tf.reduce_sum(y_true * tf.math.log(y_true / y_pred), axis=-1)
        return loss

    def kullback_leibler_tf(self, y_true, y_pred):
        # Same quantity via the built-in Keras loss
        loss = tf.reduce_sum(tf.keras.losses.KLDivergence()(y_true, y_pred))
        return loss

if __name__ == "__main__":
    def kullback_leibler_test(N=5, C=1):
        # Generate random data
        y_true = tf.random.uniform(shape=(N, ), minval=0, maxval=C, dtype=tf.float32)
        y_pred = tf.random.uniform(shape=(N, ), minval=0, maxval=C, dtype=tf.float32)

        # Convert both tensors into valid probability distributions
        y_true /= tf.reduce_sum(y_true)
        y_pred /= tf.reduce_sum(y_pred)

        # Test the kullback_leibler class
        kl = Kullback_Leibler() 
        kl_manual = kl.kullback_leibler_manual(y_true, y_pred)
        print(f"kullback_leibler_manual: {kl_manual}")

        kl_tf = kl.kullback_leibler_tf(y_true, y_pred)
        print(f"kullback_leibler_tensorflow: {kl_tf}")
        print()

        # Plot the points on a graph
        plt.figure(figsize=(8, 6))
        plt.scatter(y_true.numpy(), y_pred.numpy(), color='blue', label='Predicted vs Actual')
        plt.plot([0, C], [0, C], 'r--', label='Ideal Line')  # Diagonal line representing ideal predictions

        plt.title(f"Predictions vs Actuals\nKullback-Leibler Loss: {kl_manual.numpy()}")
        plt.xlabel('Actual Values')
        plt.ylabel('Predicted Values')
        plt.legend()
        plt.grid(True)
        plt.show()

    kullback_leibler_test()
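To make the asymmetry concrete, here is a minimal sketch (reusing the Kullback_Leibler class above, with two illustrative distributions) showing that D_KL(P || Q) and D_KL(Q || P) generally differ:

p = tf.constant([0.7, 0.2, 0.1])
q = tf.constant([0.4, 0.4, 0.2])

kl = Kullback_Leibler()
# Swapping the arguments changes the result: KL divergence is not a distance metric
print(f"KL(P || Q): {kl.kullback_leibler_manual(p, q)}")  # ≈ 0.184
print(f"KL(Q || P): {kl.kullback_leibler_manual(q, p)}")  # ≈ 0.192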



Editor in charge: 武曉燕 | Source: 小寒聊python