A. How to draw a legend with Python matplotlib
The function used to add a legend is plt.legend(). We introduce it through an example.
%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib as mpl
import numpy as np
# grouped bar chart with multiple data series
mpl.rcParams["font.sans-serif"] = ["SimHei"]   # font setting carried over from the original (Chinese) example
mpl.rcParams["axes.unicode_minus"] = False
x = np.arange(6)
y1 = [23, 5, 14, 27, 18, 14]
y2 = [10, 27, 25, 18, 23, 16]
tick_label = ["A", "B", "C", "D", "E", "F"]
bar_width = 0.35
plt.bar(x, y1, bar_width, align="center", label="Class A", alpha=0.5)
plt.bar(x + bar_width, y2, bar_width, align="center", label="Class B", alpha=0.5)
plt.xlabel("Grade")
plt.ylabel("Number of students")
plt.xticks(x + bar_width / 2, tick_label)
plt.legend(bbox_to_anchor=(1, 1),  # anchor point of the legend bounding box
           loc="upper right",      # location of the legend
           ncol=1,                 # number of columns
           mode=None,              # set to "expand" to stretch the legend horizontally across the axes
           borderaxespad=0,        # padding between the axes and the legend border
           title="Class",          # legend title
           shadow=False,           # whether to draw a shadow behind the legend frame
           fancybox=True)          # round the corners of the legend frame
plt.show()
The result is shown in the figure.
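As a follow-up, here is a minimal sketch (not part of the original example) of the mode="expand" behaviour mentioned in the comments, which stretches the legend across the full width of the axes:

# Minimal sketch: a legend stretched horizontally above the axes with mode="expand".
plt.legend(bbox_to_anchor=(0, 1.02, 1, 0.2),  # (x, y, width, height) box placed just above the axes
           loc="lower left",
           mode="expand",                     # expand the legend to fill the bounding box horizontally
           ncol=2,
           borderaxespad=0)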
B. How to determine in Python whether elements of a list lie on the same line
The idea is as follows:
First, determine the position of each input value in the grid and represent it as a tuple, e.g. (1, 2).
Then keep a list of the position combinations that lie on one line:
inLine = [set([(0,0),(0,1),(0,2)]) ,set([(1,0),(1,1),(1,2)])]
If the positions of the three values entered by the user match one of the sets in inLine, they can be joined into a straight line (a sketch of this check follows below).
The advantage of using a set is that the order of the three values does not matter.
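A minimal sketch of this idea, assuming a 3x3 grid; the line definitions and the helper name on_same_line are illustrative, not from the original answer:

# Minimal sketch (assumed 3x3 grid): check whether three positions form one of the known lines.
win_lines = [
    {(0, 0), (0, 1), (0, 2)},  # top row
    {(1, 0), (1, 1), (1, 2)},  # middle row
    {(2, 0), (2, 1), (2, 2)},  # bottom row
    {(0, 0), (1, 0), (2, 0)},  # left column
    {(0, 1), (1, 1), (2, 1)},  # middle column
    {(0, 2), (1, 2), (2, 2)},  # right column
    {(0, 0), (1, 1), (2, 2)},  # main diagonal
    {(0, 2), (1, 1), (2, 0)},  # anti-diagonal
]

def on_same_line(positions):
    """Return True if the given positions exactly match one of the predefined lines."""
    return set(positions) in win_lines

print(on_same_line([(0, 2), (0, 0), (0, 1)]))  # True, regardless of input order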
C. Visualizing convolutional neural networks with Python
This article explores how to visualize convolutional neural networks (CNNs), the models most widely used in computer vision. We first look at why visualizing CNN models matters, then introduce several visualization methods, and walk through a use case to help readers better understand the concept of model visualization.
As the cancer-tumor diagnosis case described above showed, it is crucial that researchers understand how the model they design works and what it does. In general, a deep learning researcher should keep the following points in mind:
1.1 Understand how the model works
1.2 Tune the model's parameters
1.3 Find out why the model fails
1.4 Explain the model's decisions to consumers/end users or business stakeholders
2. Methods for visualizing CNN models
Based on how they work internally, CNN visualization methods can be roughly divided into three categories:
Preliminary methods: simple ways to display the overall structure of a trained model
Activation-based methods: decode the activations of a single neuron or a group of neurons to understand what they are doing
Gradient-based methods: manipulate the gradients formed by the forward and backward passes during training
The three kinds of methods are described in detail below. The examples are implemented with the Keras deep learning library, and the dataset comes from the "Identify the Digits" competition. To reproduce the examples, make sure Keras is installed and that these steps have been carried out.
The simplest thing a researcher can do is to plot the model structure, annotating the shape and number of parameters of each layer. In Keras, the model summary can be printed with the following command:
model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_1 (Conv2D)            (None, 26, 26, 32)        320
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 24, 24, 64)        18496
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 12, 12, 64)        0
_________________________________________________________________
dropout_1 (Dropout)          (None, 12, 12, 64)        0
_________________________________________________________________
flatten_1 (Flatten)          (None, 9216)              0
_________________________________________________________________
dense_1 (Dense)              (None, 128)               1179776
_________________________________________________________________
dropout_2 (Dropout)          (None, 128)               0
_________________________________________________________________
preds (Dense)                (None, 10)                1290
=================================================================
Total params: 1,199,882
Trainable params: 1,199,882
Non-trainable params: 0
The model architecture can also be presented in a more creative and expressive way: keras.utils.vis_utils can be used to draw a diagram of the model architecture, as sketched below.
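A minimal sketch of that approach, assuming pydot and graphviz are installed; the output filename is arbitrary:

# Minimal sketch: render the model architecture to an image file.
from keras.utils.vis_utils import plot_model

plot_model(model, to_file='model_architecture.png', show_shapes=True, show_layer_names=True)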
Another method is to plot the filters of the trained model, so we can see what the filters look like. For example, the first filter of the first layer looks like this:
top_layer = model.layers[0]
plt.imshow(top_layer.get_weights()[0][:, :, :, 0].squeeze(), cmap='gray')
Generally speaking, the lower layers of a neural network mainly act as edge detectors; as the layers get deeper, the filters capture more abstract concepts, such as faces.
To understand how the network works, we can apply a filter to an input image and plot the convolved output. This shows us what specific pattern activates a given filter: for example, a face filter is activated when the input image contains a face. A quick sketch of this idea is given right after this paragraph, followed by the activation-maximization example from keras-vis.
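A minimal, illustrative sketch of applying a first-layer filter to a sample image (not from the original article; it assumes top_layer from the snippet above and val_x as defined later for the validation images):

# Minimal sketch: convolve one first-layer filter with a sample image and plot the feature map.
from scipy.signal import convolve2d

sample = val_x[0][..., 0]                        # a 28x28 validation image (assumed available)
kernel = top_layer.get_weights()[0][:, :, 0, 0]  # first filter, first input channel
feature_map = convolve2d(sample, kernel, mode='same')
plt.imshow(feature_map, cmap='gray')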
from vis.visualization import visualize_activation
from vis.utils import utils
from keras import activations
from matplotlib import pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (18, 6)
# Utility to search for layer index by name.
# Alternatively we can specify this as -1 since it corresponds to the last layer.
layer_idx = utils.find_layer_idx(model, 'preds')
# Swap softmax with linear
model.layers[layer_idx].activation = activations.linear
model = utils.apply_modifications(model)
# This is the output node we want to maximize.
filter_idx = 0
img = visualize_activation(model, layer_idx, filter_indices=filter_idx)
plt.imshow(img[..., 0])
Similarly, this idea can be applied to all classes to check what each of their patterns looks like.
for output_idx in np.arange(10):
    # Lets turn off verbose output this time to avoid clutter and just see the output.
    img = visualize_activation(model, layer_idx, filter_indices=output_idx, input_range=(0., 1.))
    plt.figure()
    plt.title('Networks perception of {}'.format(output_idx))
    plt.imshow(img[..., 0])
In image classification, the target object may be occluded, and sometimes only a small part of it is visible. Occlusion-based methods systematically cover different parts of the input image with a gray square and monitor the classifier's output. The examples clearly show that when the object in the scene is occluded, the probability of classifying it correctly drops significantly.
To understand this concept, we pick a random image from the dataset and try to plot its heatmap. This gives an intuitive picture of which parts of the image matter most to the model when it clearly distinguishes the actual class.
def iter_occlusion(image, size=8):
    # taken from https://www.kaggle.com/blargl/simple-occlusion-and-saliency-maps
    occlusion = np.full((size * 5, size * 5, 1), [0.5], np.float32)
    occlusion_center = np.full((size, size, 1), [0.5], np.float32)
    occlusion_padding = size * 2

    # print('padding...')
    image_padded = np.pad(image, ((occlusion_padding, occlusion_padding),
                                  (occlusion_padding, occlusion_padding),
                                  (0, 0)), 'constant', constant_values=0.0)

    for y in range(occlusion_padding, image.shape[0] + occlusion_padding, size):
        for x in range(occlusion_padding, image.shape[1] + occlusion_padding, size):
            tmp = image_padded.copy()
            tmp[y - occlusion_padding:y + occlusion_center.shape[0] + occlusion_padding,
                x - occlusion_padding:x + occlusion_center.shape[1] + occlusion_padding] = occlusion
            tmp[y:y + occlusion_center.shape[0], x:x + occlusion_center.shape[1]] = occlusion_center

            yield x - occlusion_padding, y - occlusion_padding, \
                tmp[occlusion_padding:tmp.shape[0] - occlusion_padding,
                    occlusion_padding:tmp.shape[1] - occlusion_padding]

i = 23  # for example
data = val_x[i]
correct_class = np.argmax(val_y[i])

# input tensor for model.predict
inp = data.reshape(1, 28, 28, 1)
# image data for matplotlib's imshow
img = data.reshape(28, 28)

# occlusion
img_size = img.shape[0]
occlusion_size = 4

print('occluding...')
heatmap = np.zeros((img_size, img_size), np.float32)
class_pixels = np.zeros((img_size, img_size), np.int16)

from collections import defaultdict
counters = defaultdict(int)

for n, (x, y, img_float) in enumerate(iter_occlusion(data, size=occlusion_size)):
    X = img_float.reshape(1, 28, 28, 1)
    out = model.predict(X)
    # print('#{}: {} @ {} (correct class: {})'.format(n, np.argmax(out), np.amax(out), out[0][correct_class]))
    # print('x {} - {} | y {} - {}'.format(x, x + occlusion_size, y, y + occlusion_size))
    heatmap[y:y + occlusion_size, x:x + occlusion_size] = out[0][correct_class]
    class_pixels[y:y + occlusion_size, x:x + occlusion_size] = np.argmax(out)
    counters[np.argmax(out)] += 1
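As a follow-up, a minimal sketch (not part of the original code) to display the occlusion heatmap that was just built:

# Minimal sketch: show the input digit and the occlusion heatmap side by side.
f, ax = plt.subplots(1, 2)
ax[0].imshow(img, cmap='gray')
ax[0].set_title('input (class {})'.format(correct_class))
ax[1].imshow(heatmap, cmap='jet')
ax[1].set_title('correct-class probability under occlusion')
plt.show()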
As in the tank example mentioned earlier, how can we know which parts of the image the model bases its prediction on? Saliency maps, which were first introduced in this paper, can be used to answer this question.
The idea behind saliency maps is quite direct: compute the gradient of the output class with respect to the input image. This tells us how the output class value changes with a small change in the input image pixels. All positive values in the gradient indicate that a small change to that pixel increases the output value. Visualizing these gradients therefore gives some intuition, highlighting the salient image regions that contribute most to the output.
class_idx = 0
indices = np.where(val_y[:, class_idx] == 1.)[0]
# pick some random input from here.
idx = indices[0]
# Lets sanity check the picked image.
from matplotlib import pyplot as plt
%matplotlib inline
plt.rcParams['figure.figsize'] = (18, 6)
plt.imshow(val_x[idx][..., 0])
from vis.visualization import visualize_saliency
from vis.utils import utils
from keras import activations
# Utility to search for layer index by name.
# Alternatively we can specify this as -1 since it corresponds to the last layer.
layer_idx = utils.find_layer_idx(model, 'preds')
# Swap softmax with linear
model.layers[layer_idx].activation = activations.linear
model = utils.apply_modifications(model)
grads = visualize_saliency(model, layer_idx, filter_indices=class_idx, seed_input=val_x[idx])
# Plot with 'jet' colormap to visualize as a heatmap.
plt.imshow(grads, cmap='jet')
# This corresponds to the Dense linear layer.
for class_idx in np.arange(10):
    indices = np.where(val_y[:, class_idx] == 1.)[0]
    idx = indices[0]
    f, ax = plt.subplots(1, 4)
    ax[0].imshow(val_x[idx][..., 0])
    for i, modifier in enumerate([None, 'guided', 'relu']):
        grads = visualize_saliency(model, layer_idx, filter_indices=class_idx,
                                   seed_input=val_x[idx], backprop_modifier=modifier)
        if modifier is None:
            modifier = 'vanilla'
        ax[i+1].set_title(modifier)
        ax[i+1].imshow(grads, cmap='jet')
Class activation mapping (CAM), or grad-CAM, is another way to visualize a model. Instead of using the gradients with respect to the output, it uses the output of the penultimate convolutional layer, so as to exploit the spatial information stored in that layer.
from vis.visualization import visualize_cam
# This corresponds to the Dense linear layer.
for class_idx in np.arange(10):
    indices = np.where(val_y[:, class_idx] == 1.)[0]
    idx = indices[0]
    f, ax = plt.subplots(1, 4)
    ax[0].imshow(val_x[idx][..., 0])
    for i, modifier in enumerate([None, 'guided', 'relu']):
        grads = visualize_cam(model, layer_idx, filter_indices=class_idx,
                              seed_input=val_x[idx], backprop_modifier=modifier)
        if modifier is None:
            modifier = 'vanilla'
        ax[i+1].set_title(modifier)
        ax[i+1].imshow(grads, cmap='jet')
This article briefly explained why visualizing CNN models matters and introduced several methods for visualizing CNN models. We hope it helps readers build better models in their subsequent deep learning applications.
D. Code walkthrough: building an RNN in Python to mimic the brain
The input and output of the RNN have the following shapes:
(number_of_records x length_of_sequence x types_of_sequences)
(number_of_records x types_of_sequences)  # where types_of_sequences is 1
%pylab inline
import math
sin_wave = np.array([math.sin(x) for x in np.arange(200)])
plt.plot(sin_wave[:50])
X = []
Y = []
seq_len = 50
num_records = len(sin_wave) - seq_len
for i in range(num_records - 50):
    X.append(sin_wave[i:i+seq_len])
    Y.append(sin_wave[i+seq_len])
X = np.array(X)
X = np.expand_dims(X, axis=2)
Y = np.array(Y)
Y = np.expand_dims(Y, axis=1)
X.shape, Y.shape
((100, 50, 1), (100, 1))
X_val = []
Y_val = []
for i in range(num_records - 50, num_records):
    X_val.append(sin_wave[i:i+seq_len])
    Y_val.append(sin_wave[i+seq_len])
X_val = np.array(X_val)
X_val = np.expand_dims(X_val, axis=2)
Y_val = np.array(Y_val)
Y_val = np.expand_dims(Y_val, axis=1)
learning_rate = 0.0001
nepoch = 25
T = 50 # length of sequence
hidden_dim = 100
output_dim = 1
bptt_truncate = 5
min_clip_value = -10
max_clip_value = 10
U = np.random.uniform(0, 1, (hidden_dim, T))            # input-to-hidden weights
W = np.random.uniform(0, 1, (hidden_dim, hidden_dim))   # hidden-to-hidden (recurrent) weights
V = np.random.uniform(0, 1, (output_dim, hidden_dim))   # hidden-to-output weights
def sigmoid(x):
    return 1 / (1 + np.exp(-x))
for epoch in range(nepoch):
    # check loss on train
    loss = 0.0

    # do a forward pass to get prediction
    for i in range(Y.shape[0]):
        x, y = X[i], Y[i]                   # get input, output values of each record
        prev_s = np.zeros((hidden_dim, 1))  # here, prev_s is the value of the previous activation of hidden layer; which is initialized as all zeroes
        for t in range(T):
            new_input = np.zeros(x.shape)   # we then do a forward pass for every timestep in the sequence
            new_input[t] = x[t]             # for this, we define a single input for that timestep
            mulu = np.dot(U, new_input)
            mulw = np.dot(W, prev_s)
            add = mulw + mulu
            s = sigmoid(add)
            mulv = np.dot(V, s)
            prev_s = s

        # calculate error
        loss_per_record = (y - mulv)**2 / 2
        loss += loss_per_record
    loss = loss / float(y.shape[0])

    # check loss on val
    val_loss = 0.0
    for i in range(Y_val.shape[0]):
        x, y = X_val[i], Y_val[i]
        prev_s = np.zeros((hidden_dim, 1))
        for t in range(T):
            new_input = np.zeros(x.shape)
            new_input[t] = x[t]
            mulu = np.dot(U, new_input)
            mulw = np.dot(W, prev_s)
            add = mulw + mulu
            s = sigmoid(add)
            mulv = np.dot(V, s)
            prev_s = s

        loss_per_record = (y - mulv)**2 / 2
        val_loss += loss_per_record
    val_loss = val_loss / float(y.shape[0])

    print('Epoch: ', epoch + 1, ', Loss: ', loss, ', Val Loss: ', val_loss)
Epoch: 1 , Loss: [[101185.61756671]] , Val Loss: [[50591.0340148]]
...
...
# train model (in the original walkthrough this block runs inside the same epoch loop as the loss check above)
for i in range(Y.shape[0]):
    x, y = X[i], Y[i]

    layers = []
    prev_s = np.zeros((hidden_dim, 1))
    dU = np.zeros(U.shape)
    dV = np.zeros(V.shape)
    dW = np.zeros(W.shape)

    dU_t = np.zeros(U.shape)
    dV_t = np.zeros(V.shape)
    dW_t = np.zeros(W.shape)

    dU_i = np.zeros(U.shape)
    dW_i = np.zeros(W.shape)

    # forward pass
    for t in range(T):
        new_input = np.zeros(x.shape)
        new_input[t] = x[t]
        mulu = np.dot(U, new_input)
        mulw = np.dot(W, prev_s)
        add = mulw + mulu
        s = sigmoid(add)
        mulv = np.dot(V, s)
        layers.append({'s': s, 'prev_s': prev_s})
        prev_s = s

    # derivative of pred
    dmulv = (mulv - y)

    # backward pass
    for t in range(T):
        dV_t = np.dot(dmulv, np.transpose(layers[t]['s']))
        dsv = np.dot(np.transpose(V), dmulv)

        ds = dsv
        dadd = add * (1 - add) * ds
        dmulw = dadd * np.ones_like(mulw)
        dprev_s = np.dot(np.transpose(W), dmulw)

        for i in range(t-1, max(-1, t-bptt_truncate-1), -1):
            ds = dsv + dprev_s
            dadd = add * (1 - add) * ds
            dmulw = dadd * np.ones_like(mulw)
            dmulu = dadd * np.ones_like(mulu)

            dW_i = np.dot(W, layers[t]['prev_s'])
            dprev_s = np.dot(np.transpose(W), dmulw)

            new_input = np.zeros(x.shape)
            new_input[t] = x[t]
            dU_i = np.dot(U, new_input)
            dx = np.dot(np.transpose(U), dmulu)

            dU_t += dU_i
            dW_t += dW_i

        dV += dV_t
        dU += dU_t
        dW += dW_t

        if dU.max() > max_clip_value:
            dU[dU > max_clip_value] = max_clip_value
        if dV.max() > max_clip_value:
            dV[dV > max_clip_value] = max_clip_value
        if dW.max() > max_clip_value:
            dW[dW > max_clip_value] = max_clip_value
        if dU.min() < min_clip_value:
            dU[dU < min_clip_value] = min_clip_value
        if dV.min() < min_clip_value:
            dV[dV < min_clip_value] = min_clip_value
        if dW.min() < min_clip_value:
            dW[dW < min_clip_value] = min_clip_value

    # update
    U -= learning_rate * dU
    V -= learning_rate * dV
    W -= learning_rate * dW
Epoch: 1 , Loss: [[101185.61756671]] , Val Loss: [[50591.0340148]]
Epoch: 2 , Loss: [[61205.46869629]] , Val Loss: [[30601.34535365]]
Epoch: 3 , Loss: [[31225.3198258]] , Val Loss: [[15611.65669247]]
Epoch: 4 , Loss: [[11245.17049551]] , Val Loss: [[5621.96780111]]
Epoch: 5 , Loss: [[1264.5157739]] , Val Loss: [[632.02563908]]
Epoch: 6 , Loss: [[20.15654115]] , Val Loss: [[10.05477285]]
Epoch: 7 , Loss: [[17.13622839]] , Val Loss: [[8.55190426]]
Epoch: 8 , Loss: [[17.38870495]] , Val Loss: [[8.68196484]]
Epoch: 9 , Loss: [[17.181681]] , Val Loss: [[8.57837827]]
Epoch: 10 , Loss: [[17.31275313]] , Val Loss: [[8.64199652]]
Epoch: 11 , Loss: [[17.12960034]] , Val Loss: [[8.54768294]]
Epoch: 12 , Loss: [[17.09020065]] , Val Loss: [[8.52993502]]
Epoch: 13 , Loss: [[17.17370113]] , Val Loss: [[8.57517454]]
Epoch: 14 , Loss: [[17.04906914]] , Val Loss: [[8.50658127]]
Epoch: 15 , Loss: [[16.96420184]] , Val Loss: [[8.46794248]]
Epoch: 16 , Loss: [[17.017519]] , Val Loss: [[8.49241316]]
Epoch: 17 , Loss: [[16.94199493]] , Val Loss: [[8.45748739]]
Epoch: 18 , Loss: [[16.99796892]] , Val Loss: [[8.48242177]]
Epoch: 19 , Loss: [[17.24817035]] , Val Loss: [[8.6126231]]
Epoch: 20 , Loss: [[17.00844599]] , Val Loss: [[8.48682234]]
Epoch: 21 , Loss: [[17.03943262]] , Val Loss: [[8.50437328]]
Epoch: 22 , Loss: [[17.01417255]] , Val Loss: [[8.49409597]]
Epoch: 23 , Loss: [[17.20918888]] , Val Loss: [[8.5854792]]
Epoch: 24 , Loss: [[16.92068017]] , Val Loss: [[8.44794633]]
Epoch: 25 , Loss: [[16.76856238]] , Val Loss: [[8.37295808]]
preds = []
for i in range(Y.shape[0]):
    x, y = X[i], Y[i]
    prev_s = np.zeros((hidden_dim, 1))
    # Forward pass
    for t in range(T):
        mulu = np.dot(U, x)
        mulw = np.dot(W, prev_s)
        add = mulw + mulu
        s = sigmoid(add)
        mulv = np.dot(V, s)
        prev_s = s

    preds.append(mulv)

preds = np.array(preds)

plt.plot(preds[:, 0, 0], 'g')
plt.plot(Y[:, 0], 'r')
plt.show()
preds = []
for i in range(Y_val.shape[0]):
    x, y = X_val[i], Y_val[i]
    prev_s = np.zeros((hidden_dim, 1))
    # For each time step...
    for t in range(T):
        mulu = np.dot(U, x)
        mulw = np.dot(W, prev_s)
        add = mulw + mulu
        s = sigmoid(add)
        mulv = np.dot(V, s)
        prev_s = s

    preds.append(mulv)

preds = np.array(preds)

plt.plot(preds[:, 0, 0], 'g')
plt.plot(Y_val[:, 0], 'r')
plt.show()
from sklearn.metrics import mean_squared_error
math.sqrt(mean_squared_error(Y_val[:, 0] * max_val, preds[:, 0, 0] * max_val))  # max_val is assumed to be defined in an earlier (not shown) scaling step
0.127191931509431