scikit-learn provides the three naive Bayes variants most commonly used in machine learning: Bernoulli Naive Bayes (BernoulliNB), Gaussian Naive Bayes (GaussianNB), and Multinomial Naive Bayes (MultinomialNB).
import numpy as np
import matplotlib.pyplot as plt
from sklearn.datasets import make_blobs
from sklearn.model_selection import train_test_split
# Generate a dataset with 500 samples in 5 clusters, then split it 75%:25%
X, y = make_blobs(n_samples = 500, centers = 5, random_state = 8)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 8)
2.1 Bernoulli Naive Bayes (BernoulliNB)
The Bernoulli distribution is also known as the "0-1 distribution" (or two-point distribution). For a random variable X that can only take the values 0 or 1, i.e. X ∈ {0, 1}, X is said to follow a Bernoulli distribution, with probabilities:
$ f(x)=\left\{
\begin{aligned}
P(x = 1) & = & p \\
P(x = 0) & = & 1 - p
\end{aligned}
\right. $
$ \text{s.t.}\ 0 < p < 1 $
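From this definition, the mean and variance of a Bernoulli variable follow directly:

$ E[X] = 1 \cdot p + 0 \cdot (1 - p) = p $

$ \mathrm{Var}(X) = E[X^2] - (E[X])^2 = p - p^2 = p(1 - p) $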
Bernoulli Naive Bayes is well suited to data whose features follow a Bernoulli distribution, that is, experiments with only two possible outcomes: heads or tails, success or failure, defective or not defective, recovered or not recovered, and so on.
Note that although Bernoulli Naive Bayes excels at binary problems, it can still handle multi-class classification; its performance there is simply less ideal.
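To make the binarization step concrete, here is a small sketch with made-up toy numbers (not from the text): BernoulliNB thresholds each continuous feature at the `binarize` value and then models the resulting 0/1 pattern.

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

# Toy continuous features; binarize=2.0 thresholds each value into {0, 1}
X_toy = np.array([[1.0, 3.0], [0.5, 4.0], [3.5, 0.2], [2.8, 1.0]])
y_toy = np.array([0, 0, 1, 1])  # class 0: small 1st feature; class 1: large 1st feature

clf = BernoulliNB(binarize=2.0).fit(X_toy, y_toy)
# [0.1, 5.0] binarizes to [0, 1], which matches the class-0 pattern
print(clf.predict(np.array([[0.1, 5.0]])))  # → [0]
```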
from sklearn.naive_bayes import BernoulliNB
# By default the binarization threshold is binarize = 0; tuning it can improve the model
nb = BernoulliNB()
#nb = BernoulliNB(binarize = 4)
nb.fit(X_train, y_train)
# Print the model score, i.e. the accuracy
score = nb.score(X_test, y_test)
print("Model score: {:.3f}".format(score))
# Set the figure resolution to 100 dpi
plt.figure(dpi = 100)
plt.rcParams['font.sans-serif'] = [u'Microsoft YaHei']
# Set the axis ranges
x_min, x_max = X[:, 0].min() - 0.5, X[:, 0].max() + 0.5
y_min, y_max = X[:, 1].min() - 0.5, X[:, 1].max() + 0.5
# Use different background colors for the predicted classes
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.02), np.arange(y_min, y_max, 0.02))
z = nb.predict(np.c_[(xx.ravel(), yy.ravel())]).reshape(xx.shape)
plt.pcolormesh(xx, yy, z, cmap = plt.cm.Pastel1)
# Show the training and test sets as scatter plots
plt.scatter(X_train[:, 0], X_train[:, 1], c = y_train, cmap = plt.cm.cool, edgecolor = "k")
plt.scatter(X_test[:, 0], X_test[:, 1], c = y_test, cmap = plt.cm.cool, marker = "*")
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
# Set the figure title
plt.title("Classifier: Bernoulli Naive Bayes (BernoulliNB)")
plt.show()
2.2 Gaussian Naive Bayes (GaussianNB)
Gaussian Naive Bayes assumes that the sample features follow a Gaussian, i.e. normal, distribution. In practice, many natural phenomena approximately obey this law.
[Knowledge point] [Gaussian (normal) distribution](knowledgement/GaussianDistribution.ipynb)
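What the model actually learns under this assumption is a per-class mean and variance for each feature. A quick sketch with made-up toy data (the fitted means are exposed as the estimator's `theta_` attribute):

```python
import numpy as np
from sklearn.naive_bayes import GaussianNB

rng = np.random.RandomState(0)
# Two 1-D Gaussian clusters: class 0 centered near 0, class 1 near 5
X_demo = np.concatenate([rng.normal(0, 1, 100), rng.normal(5, 1, 100)]).reshape(-1, 1)
y_demo = np.array([0] * 100 + [1] * 100)

gnb_demo = GaussianNB().fit(X_demo, y_demo)
# theta_ holds the estimated per-class feature means, roughly [0, 5] here
print(gnb_demo.theta_.ravel())
```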
Next we fit the previously generated data with Gaussian Naive Bayes and check its accuracy.
from sklearn.naive_bayes import GaussianNB
gnb = GaussianNB()
gnb.fit(X_train, y_train)
score = gnb.score(X_test, y_test)
print("Model score: {:.3f}".format(score))
# Set the axis ranges
x_min, x_max = X[:, 0].min() - 0.5, X[:, 0].max() + 0.5
y_min, y_max = X[:, 1].min() - 0.5, X[:, 1].max() + 0.5
# Configure matplotlib so that non-ASCII labels display correctly
# Set the figure resolution to 100 dpi
plt.figure(dpi = 100)
plt.rcParams['font.sans-serif'] = [u'Microsoft YaHei']
# Use different background colors for the predicted classes
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.02), np.arange(y_min, y_max, 0.02))
z = gnb.predict(np.c_[(xx.ravel(), yy.ravel())]).reshape(xx.shape)
plt.pcolormesh(xx, yy, z, cmap = plt.cm.Pastel1)
# Show the training and test sets as scatter plots
plt.scatter(X_train[:, 0], X_train[:, 1], c = y_train, cmap = plt.cm.cool, edgecolor = "k")
plt.scatter(X_test[:, 0], X_test[:, 1], c = y_test, cmap = plt.cm.cool, marker = "*")
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
# Set the figure title
plt.title("Classifier: Gaussian Naive Bayes (GaussianNB)")
plt.show()
# Fit the data with Multinomial Naive Bayes (this raises an error, shown below)
mnb = MultinomialNB()
mnb.fit(X_train, y_train)
score = mnb.score(X_test, y_test)
print("Model score: {:.3f}".format(score))
---------------------------------------------------------------------------
ValueError Traceback (most recent call last)
<ipython-input-11-833577799ae7> in <module>
3 # 使用多项式贝叶斯拟合数据
4 mnb = MultinomialNB()
----> 5 mnb.fit(X_train, y_train)
6 score = mnb.score(X_test, y_test)
C:\ProgramData\Anaconda3\lib\site-packages\sklearn\naive_bayes.py in fit(self, X, y, sample_weight)
611 self.feature_count_ = np.zeros((n_effective_classes, n_features),
612 dtype=np.float64)
--> 613 self._count(X, Y)
614 alpha = self._check_alpha()
615 self._update_feature_log_prob(alpha)
C:\ProgramData\Anaconda3\lib\site-packages\sklearn\naive_bayes.py in _count(self, X, Y)
718 """Count and smooth feature occurrences."""
719 if np.any((X.data if issparse(X) else X) < 0):
--> 720 raise ValueError("Input X must be non-negative")
721 self.feature_count_ += safe_sparse_dot(Y.T, X)
722 self.class_count_ += Y.sum(axis=0)
ValueError: Input X must be non-negative
The run fails with the following error:

ValueError: Input X must be non-negative

That is, the input feature values must not be negative; this is a hard requirement of Multinomial Naive Bayes.
Below we first normalize the input with MinMaxScaler(), which rescales each feature of the samples $X$ into the range [0, 1].
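The min-max mapping itself is simply $x' = (x - x_{min}) / (x_{max} - x_{min})$ per feature; a tiny sketch with made-up numbers:

```python
import numpy as np
from sklearn.preprocessing import MinMaxScaler

# One feature ranging from -2 to 2; min-max scaling maps it into [0, 1]
X_mm = np.array([[-2.0], [0.0], [2.0]])
mm_scaler = MinMaxScaler().fit(X_mm)
print(mm_scaler.transform(X_mm).ravel())  # → [0.  0.5 1. ]
```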
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
scaler.fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
# Refit the model on the normalized (non-negative) data
mnb.fit(X_train_scaled, y_train)
score = mnb.score(X_test_scaled, y_test)
print("Model score: {:.3f}".format(score))
# Set the axis ranges
x_min, x_max = X[:, 0].min() - 0.5, X[:, 0].max() + 0.5
y_min, y_max = X[:, 1].min() - 0.5, X[:, 1].max() + 0.5
# Configure matplotlib so that non-ASCII labels display correctly
# Set the figure resolution to 100 dpi
plt.figure(dpi = 100)
plt.rcParams['font.sans-serif'] = [u'Microsoft YaHei']
# Use different background colors for the predicted classes;
# the grid points must be scaled with the same scaler before prediction
xx, yy = np.meshgrid(np.arange(x_min, x_max, 0.02), np.arange(y_min, y_max, 0.02))
z = mnb.predict(scaler.transform(np.c_[(xx.ravel(), yy.ravel())])).reshape(xx.shape)
plt.pcolormesh(xx, yy, z, cmap = plt.cm.Pastel1)
# Show the training and test sets as scatter plots
plt.scatter(X_train[:, 0], X_train[:, 1], c = y_train, cmap = plt.cm.cool, edgecolor = "k")
plt.scatter(X_test[:, 0], X_test[:, 1], c = y_test, cmap = plt.cm.cool, marker = "*")
plt.xlim(xx.min(), xx.max())
plt.ylim(yy.min(), yy.max())
# Set the figure title
plt.title("Classifier: Multinomial Naive Bayes (MultinomialNB)")
3.1 Loading the dataset
The Wisconsin breast cancer dataset (Breast_Cancer) is a real-world dataset containing 569 case samples, each with 30 features. Every sample belongs to one of two classes: malignant or benign.
n = 569
n_feature = 30
y = {0, 1} : {0: malignant, 1: benign}
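These figures can be checked directly with scikit-learn's standard loader for this dataset:

```python
from sklearn.datasets import load_breast_cancer

# Load the dataset and verify the figures quoted above
cancer = load_breast_cancer()
print(cancer.data.shape)          # (569, 30)
print(list(cancer.target_names))  # ['malignant', 'benign']
# Class distribution: 212 malignant (label 0), 357 benign (label 1)
print((cancer.target == 0).sum(), (cancer.target == 1).sum())
```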
:Number of Instances: 569
:Number of Attributes: 30 numeric, predictive attributes and the class
:Attribute Information:
- radius (mean of distances from center to points on the perimeter)
- texture (standard deviation of gray-scale values)
- perimeter
- area
- smoothness (local variation in radius lengths)
- compactness (perimeter^2 / area - 1.0)
- concavity (severity of concave portions of the contour)
- concave points (number of concave portions of the contour)
- symmetry
- fractal dimension ("coastline approximation" - 1)
The mean, standard error, and "worst" or largest (mean of the three
largest values) of these features were computed for each image,
resulting in 30 features. For instance, field 3 is Mean Radius, field
13 is Radius SE, field 23 is Worst Radius.
- class:
- WDBC-Malignant
- WDBC-Benign
:Summary Statistics:
===================================== ====== ======
Min Max
===================================== ====== ======
radius (mean): 6.981 28.11
texture (mean): 9.71 39.28
perimeter (mean): 43.79 188.5
area (mean): 143.5 2501.0
smoothness (mean): 0.053 0.163
compactness (mean): 0.019 0.345
concavity (mean): 0.0 0.427
concave points (mean): 0.0 0.201
symmetry (mean): 0.106 0.304
fractal dimension (mean): 0.05 0.097
radius (standard error): 0.112 2.873
texture (standard error): 0.36 4.885
perimeter (standard error): 0.757 21.98
area (standard error): 6.802 542.2
smoothness (standard error): 0.002 0.031
compactness (standard error): 0.002 0.135
concavity (standard error): 0.0 0.396
concave points (standard error): 0.0 0.053
symmetry (standard error): 0.008 0.079
fractal dimension (standard error): 0.001 0.03
radius (worst): 7.93 36.04
texture (worst): 12.02 49.54
perimeter (worst): 50.41 251.2
area (worst): 185.2 4254.0
smoothness (worst): 0.071 0.223
compactness (worst): 0.027 1.058
concavity (worst): 0.0 1.252
concave points (worst): 0.0 0.291
symmetry (worst): 0.156 0.664
fractal dimension (worst): 0.055 0.208
===================================== ====== ======
:Missing Attribute Values: None
:Class Distribution: 212 - Malignant, 357 - Benign
:Creator: Dr. William H. Wolberg, W. Nick Street, Olvi L. Mangasarian
:Donor: Nick Street
:Date: November, 1995
This is a copy of UCI ML Breast Cancer Wisconsin (Diagnostic) datasets.
https://goo.gl/U2Uwz2
Features are computed from a digitized image of a fine needle
aspirate (FNA) of a breast mass. They describe
characteristics of the cell nuclei present in the image.
Separating plane described above was obtained using
Multisurface Method-Tree (MSM-T) [K. P. Bennett, "Decision Tree
Construction Via Linear Programming." Proceedings of the 4th
Midwest Artificial Intelligence and Cognitive Science Society,
pp. 97-101, 1992], a classification method which uses linear
programming to construct a decision tree. Relevant features
were selected using an exhaustive search in the space of 1-4
features and 1-3 separating planes.
The actual linear program used to obtain the separating plane
in the 3-dimensional space is that described in:
[K. P. Bennett and O. L. Mangasarian: "Robust Linear
Programming Discrimination of Two Linearly Inseparable Sets",
Optimization Methods and Software 1, 1992, 23-34].
This database is also available through the UW CS ftp server:
ftp ftp.cs.wisc.edu
cd math-prog/cpo-dataset/machine-learn/WDBC/
.. topic:: References
- W.N. Street, W.H. Wolberg and O.L. Mangasarian. Nuclear feature extraction
for breast tumor diagnosis. IS&T/SPIE 1993 International Symposium on
Electronic Imaging: Science and Technology, volume 1905, pages 861-870,
San Jose, CA, 1993.
- O.L. Mangasarian, W.N. Street and W.H. Wolberg. Breast cancer diagnosis and
prognosis via linear programming. Operations Research, 43(4), pages 570-577,
July-August 1995.
- W.H. Wolberg, W.N. Street, and O.L. Mangasarian. Machine learning techniques
to diagnose breast cancer from fine-needle aspirates. Cancer Letters 77 (1994)
163-171.
data: the sample features; each sample is a 30-dimensional feature array
target: the sample labels, i.e. the class values in {0, 1}, where 0 is malignant and 1 is benign
target_names: the label names, i.e. ['malignant', 'benign']
DESCR: a description of the dataset
feature_names: the names of the features (a 30-element array)
filename: the dataset's file name, by default "C:\ProgramData\Anaconda3\lib\site-packages\sklearn\datasets\data\breast_cancer.csv"
Feature names:
['mean radius' 'mean texture' 'mean perimeter' 'mean area'
'mean smoothness' 'mean compactness' 'mean concavity'
'mean concave points' 'mean symmetry' 'mean fractal dimension'
'radius error' 'texture error' 'perimeter error' 'area error'
'smoothness error' 'compactness error' 'concavity error'
'concave points error' 'symmetry error' 'fractal dimension error'
'worst radius' 'worst texture' 'worst perimeter' 'worst area'
'worst smoothness' 'worst compactness' 'worst concavity'
'worst concave points' 'worst symmetry' 'worst fractal dimension']
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
cancer = load_breast_cancer()
X, y = cancer.data, cancer.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state = 38)
from sklearn.naive_bayes import BernoulliNB
from sklearn.naive_bayes import GaussianNB
from sklearn.naive_bayes import MultinomialNB
# Load the preprocessing module and normalize the samples
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
scaler.fit(X_train)
X_train_scaled = scaler.transform(X_train)
X_test_scaled = scaler.transform(X_test)
bnb = BernoulliNB()
bnb_norm = BernoulliNB()
gnb = GaussianNB()
gnb_norm = GaussianNB()
mnb = MultinomialNB()
bnb.fit(X_train, y_train)
bnb_norm.fit(X_train_scaled, y_train)
gnb.fit(X_train, y_train)
gnb_norm.fit(X_train_scaled, y_train)
mnb.fit(X_train_scaled, y_train)
score_train_bnb = bnb.score(X_train, y_train)
score_train_gnb = gnb.score(X_train, y_train)
score_train_bnb_norm = bnb_norm.score(X_train_scaled, y_train)
score_train_gnb_norm = gnb_norm.score(X_train_scaled, y_train)
score_train_mnb_norm = mnb.score(X_train_scaled, y_train)
score_test_bnb = bnb.score(X_test, y_test)
score_test_gnb = gnb.score(X_test, y_test)
score_test_bnb_norm = bnb_norm.score(X_test_scaled, y_test)
score_test_gnb_norm = gnb_norm.score(X_test_scaled, y_test)
score_test_mnb_norm = mnb.score(X_test_scaled, y_test)
print("BernoulliNB model, training score: {0:.3f}, test score: {1:.3f}".format(score_train_bnb, score_test_bnb))
print("BernoulliNB model (normalized), training score: {0:.3f}, test score: {1:.3f}".format(score_train_bnb_norm, score_test_bnb_norm))
print("GaussianNB model, training score: {0:.3f}, test score: {1:.3f}".format(score_train_gnb, score_test_gnb))
print("GaussianNB model (normalized), training score: {0:.3f}, test score: {1:.3f}".format(score_train_gnb_norm, score_test_gnb_norm))
print("MultinomialNB model (normalized), training score: {0:.3f}, test score: {1:.3f}".format(score_train_mnb_norm, score_test_mnb_norm))
BernoulliNB model, training score: 0.613, test score: 0.671
BernoulliNB model (normalized), training score: 0.622, test score: 0.657
GaussianNB model, training score: 0.948, test score: 0.944
GaussianNB model (normalized), training score: 0.946, test score: 0.937
MultinomialNB model (normalized), training score: 0.857, test score: 0.860
# data_x / data_y: a single sample and its true label; the original definition is not
# shown in the source, so here we assume the first test sample for illustration
data_x, data_y = X_test[[0]], y_test[0]
print("True class of the sample: {}".format(data_y))
print("Class predicted by BernoulliNB: {}".format(bnb.predict(data_x)[0]))
print("Class predicted by GaussianNB: {}".format(gnb.predict(data_x)[0]))
# MultinomialNB was fitted on normalized data, so scale the sample first
print("Class predicted by MultinomialNB: {}".format(mnb.predict(scaler.transform(data_x))[0]))
The learning curve (Learning Curve) is an **extremely important** reference for **performance analysis** in machine-learning applications: it shows how various parameters, such as the number of training samples or the learning rate, affect prediction accuracy.
Below we plot the effect of the number of training samples on accuracy.
This is only a demonstration; the implementation is presented in Chapter 11.
import numpy as np
import matplotlib.pyplot as plt
from sklearn.model_selection import learning_curve
from sklearn.model_selection import ShuffleSplit
def plot_learning_curve(estimator, title, X, y, ylim=None, cv=None,
n_jobs=1, train_sizes=np.linspace(.1, 1.0, 5)):
plt.figure()
plt.title(title)
if ylim is not None:
plt.ylim(*ylim)
plt.xlabel("Training examples")
plt.ylabel("Score")
train_sizes, train_scores, test_scores = learning_curve(
estimator, X, y, cv=cv, n_jobs=n_jobs, train_sizes=train_sizes)
train_scores_mean = np.mean(train_scores, axis=1)
test_scores_mean = np.mean(test_scores, axis=1)
plt.grid()
plt.plot(train_sizes, train_scores_mean, 'o-', color="r",
label="Training score")
plt.plot(train_sizes, test_scores_mean, 'o-', color="g",
label="Cross-validation score")
plt.legend(loc="lower right")
return plt
title = "Learning Curves (Naive Bayes)"
cv = ShuffleSplit(n_splits=100, test_size=0.2, random_state=0)
estimator = GaussianNB()
plot_learning_curve(estimator, title, X, y, ylim=(0.9, 1.01), cv=cv, n_jobs=4)
plt.show()