
pythoninline

Published: 2023-08-30 18:01:29

A. How to draw a legend with matplotlib in Python

The function for adding a legend is plt.legend(); the following example shows how to use it.

%matplotlib inline
import matplotlib.pyplot as plt
import matplotlib as mpl
import numpy as np

# Grouped bar chart with two data series
mpl.rcParams["font.sans-serif"] = ["SimHei"]  # font that can render the Chinese labels below
mpl.rcParams["axes.unicode_minus"] = False
x = np.arange(6)
y1 = [23, 5, 14, 27, 18, 14]
y2 = [10, 27, 25, 18, 23, 16]
tick_label = ["A", "B", "C", "D", "E", "F"]
bar_width = 0.35
plt.bar(x, y1, bar_width, align="center", label="班級A", alpha=0.5)
plt.bar(x + bar_width, y2, bar_width, align="center", label="班級B", alpha=0.5)
plt.xlabel("成績等級")
plt.ylabel("人數")
plt.xticks(x + bar_width / 2, tick_label)

plt.legend(bbox_to_anchor=(1, 1),  # anchor point of the legend bounding box
           loc="upper right",      # location of the legend
           ncol=1,                 # number of columns
           mode=None,              # set to "expand" to stretch the legend across the axes
           borderaxespad=0,        # padding between the axes and the legend border
           title="班級",           # legend title
           shadow=False,           # whether to draw a shadow behind the legend frame
           fancybox=True)          # round the corners of the legend frame
plt.show()

The result is shown in the figure. The legend keywords above can be varied; see the sketch that follows.
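For comparison, a minimal variant (the anchor values follow matplotlib's legend guide and are not from the original post) that places the legend just above the axes and stretches it across their full width with mode="expand":

plt.legend(bbox_to_anchor=(0., 1.02, 1., .102),  # box spanning the axes width, just above them
           loc="lower left",
           mode="expand",    # stretch the legend horizontally across the bounding box
           ncol=2,
           borderaxespad=0)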

B. How to determine whether elements of a list lie on the same line in Python

The idea is as follows.
First, determine the position of each input value in the grid, represented as a tuple, e.g. (1, 2).
Then keep a list of the position sets that form a straight line:
inLine = [set([(0,0),(0,1),(0,2)]), set([(1,0),(1,1),(1,2)])]
If the positions of the three values entered by the user form a set that appears in inLine, the three values can be joined into a straight line.
The advantage of using set is that the order of the three values does not matter, as the sketch below shows.
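A minimal sketch of this check, assuming a 3x3 grid (the full line list and the helper name is_in_line are illustrative, not from the original post):

in_line = [
    {(0, 0), (0, 1), (0, 2)},  # rows
    {(1, 0), (1, 1), (1, 2)},
    {(2, 0), (2, 1), (2, 2)},
    {(0, 0), (1, 0), (2, 0)},  # columns
    {(0, 1), (1, 1), (2, 1)},
    {(0, 2), (1, 2), (2, 2)},
    {(0, 0), (1, 1), (2, 2)},  # diagonals
    {(0, 2), (1, 1), (2, 0)},
]

def is_in_line(positions):
    # set comparison makes the check order-insensitive
    return set(positions) in in_line

print(is_in_line([(0, 2), (0, 0), (0, 1)]))  # True, regardless of input order
print(is_in_line([(0, 0), (1, 2), (2, 1)]))  # False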

C. Visualizing convolutional neural networks with Python

This article explores how to visualize a convolutional neural network (CNN), the model most widely used in computer vision. It first explains why visualizing CNN models matters, then introduces several visualization methods, and walks through a use case to help readers better understand the concept of model visualization.

As the cancer tumour diagnosis case introduced above shows, it is crucial that researchers understand how the model they have designed works and what it does. Generally speaking, a deep learning researcher should keep the following points in mind:

1.1 Understand how the model works

1.2 Tune the model's parameters

1.3 Find out why the model fails

1.4 Explain the model's decisions to consumers/end users or business owners

2. Methods for visualizing CNN models

Based on how they work internally, CNN visualization methods can be broadly divided into three categories:

Preliminary methods: simple ways to display the overall structure of a trained model

Activation-based methods: deciphering the activations of a single neuron or a group of neurons to understand what they are doing

Gradient-based methods: manipulating the gradients formed by the forward and backward passes during training

The three categories are described in detail below. The examples are implemented with the Keras deep learning library, and the dataset used in this article comes from the "Identify the Digits" competition. Readers who want to reproduce the examples should make sure Keras is installed and the competition's setup steps have been completed. A minimal sketch of a compatible model follows.
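The excerpt does not show the model definition, so here is a minimal sketch of a model consistent with the summary printed below; the dropout rates and activations are assumptions:

from keras.models import Sequential
from keras.layers import Conv2D, MaxPooling2D, Dropout, Flatten, Dense

model = Sequential([
    Conv2D(32, (3, 3), activation='relu', input_shape=(28, 28, 1)),  # conv2d_1
    Conv2D(64, (3, 3), activation='relu'),                           # conv2d_2
    MaxPooling2D(pool_size=(2, 2)),                                  # max_pooling2d_1
    Dropout(0.25),                                                   # dropout_1 (rate assumed)
    Flatten(),                                                       # flatten_1
    Dense(128, activation='relu'),                                   # dense_1
    Dropout(0.5),                                                    # dropout_2 (rate assumed)
    Dense(10, activation='softmax', name='preds'),                   # preds
])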

The simplest thing a researcher can do is plot the model architecture, annotating the shape and number of parameters of each layer. In Keras, a text summary of the architecture is produced with the following command:

model.summary()
_________________________________________________________________
Layer (type)                 Output Shape              Param #
=================================================================
conv2d_1 (Conv2D)            (None, 26, 26, 32)        320
_________________________________________________________________
conv2d_2 (Conv2D)            (None, 24, 24, 64)        18496
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 12, 12, 64)        0
_________________________________________________________________
dropout_1 (Dropout)          (None, 12, 12, 64)        0
_________________________________________________________________
flatten_1 (Flatten)          (None, 9216)              0
_________________________________________________________________
dense_1 (Dense)              (None, 128)               1179776
_________________________________________________________________
dropout_2 (Dropout)          (None, 128)               0
_________________________________________________________________
preds (Dense)                (None, 10)                1290
=================================================================
Total params: 1,199,882
Trainable params: 1,199,882
Non-trainable params: 0

The architecture can also be presented in a more creative and expressive way: the keras.utils.vis_utils module can draw the model architecture diagram, as sketched below.
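A minimal sketch using that utility (the output file name is an arbitrary choice; pydot and graphviz must be installed):

from keras.utils.vis_utils import plot_model

# Render the architecture to an image file, including each layer's output shape.
plot_model(model, to_file='model.png', show_shapes=True, show_layer_names=True)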

Another approach is to plot the filters of a trained model to see what they have learned. For example, the first filter of the first layer looks like this:

top_layer = model.layers[0]
plt.imshow(top_layer.get_weights()[0][:, :, :, 0].squeeze(), cmap='gray')

Generally speaking, the lower layers of a neural network act mainly as edge detectors, while deeper layers contain filters that capture more abstract concepts, such as faces.

To understand what the network is doing, we can apply a filter to an input image and plot the convolved output; this shows the particular pattern that activates a given filter. For example, the image below is generated for a face filter, which activates when the input image contains a face.

from vis.visualization import visualize_activation

from vis.utils import utils

from keras import activations

from matplotlib import pyplot as plt

%matplotlib inline

plt.rcParams['figure.figsize'] = (18, 6)

# Utility to search for layer index by name.

# Alternatively we can specify this as -1 since it corresponds to the last layer.

layer_idx = utils.find_layer_idx(model, 'preds')

# Swap softmax with linear

model.layers[layer_idx].activation = activations.linear

model = utils.apply_modifications(model)

# This is the output node we want to maximize.
filter_idx = 0

img = visualize_activation(model, layer_idx, filter_indices=filter_idx)

plt.imshow(img[..., 0])

In the same way, the idea can be applied to all the output classes to inspect what their patterns look like.

for output_idx in np.arange(10):

  # Lets turn off verbose output this time to avoid clutter and just see the output.

  img = visualize_activation(model, layer_idx, filter_indices=output_idx, input_range=(0., 1.))

  plt.figure()

  plt.title('Networks perception of {}'.format(output_idx))

  plt.imshow(img[..., 0])

In image classification, the target object may be occluded, and sometimes only a small part of it is visible. Occlusion-based methods systematically cover different parts of the input image with a grey square while monitoring the classifier's output. These experiments clearly show that when the object the model has localized in the scene is occluded, the probability of a correct classification drops significantly.

To see this in practice, we can sample an image from the dataset and plot its heatmap. This gives an intuitive picture of which parts of the image matter most when the model distinguishes the true class.

def iter_occlusion(image, size=8):
    # taken from https://www.kaggle.com/blargl/simple-occlusion-and-saliency-maps
    occlusion = np.full((size * 5, size * 5, 1), [0.5], np.float32)
    occlusion_center = np.full((size, size, 1), [0.5], np.float32)
    occlusion_padding = size * 2

    # print('padding...')
    image_padded = np.pad(image, (
        (occlusion_padding, occlusion_padding),
        (occlusion_padding, occlusion_padding),
        (0, 0)
    ), 'constant', constant_values=0.0)

    for y in range(occlusion_padding, image.shape[0] + occlusion_padding, size):
        for x in range(occlusion_padding, image.shape[1] + occlusion_padding, size):
            tmp = image_padded.copy()

            tmp[y - occlusion_padding:y + occlusion_center.shape[0] + occlusion_padding,
                x - occlusion_padding:x + occlusion_center.shape[1] + occlusion_padding] = occlusion

            tmp[y:y + occlusion_center.shape[0], x:x + occlusion_center.shape[1]] = occlusion_center

            yield x - occlusion_padding, y - occlusion_padding, \
                tmp[occlusion_padding:tmp.shape[0] - occlusion_padding,
                    occlusion_padding:tmp.shape[1] - occlusion_padding]

i = 23  # for example
data = val_x[i]
correct_class = np.argmax(val_y[i])

# input tensor for model.predict
inp = data.reshape(1, 28, 28, 1)

# image data for matplotlib's imshow
img = data.reshape(28, 28)

# occlusion
img_size = img.shape[0]
occlusion_size = 4

print('occluding...')
heatmap = np.zeros((img_size, img_size), np.float32)
class_pixels = np.zeros((img_size, img_size), np.int16)

from collections import defaultdict
counters = defaultdict(int)

for n, (x, y, img_float) in enumerate(iter_occlusion(data, size=occlusion_size)):
    X = img_float.reshape(1, 28, 28, 1)
    out = model.predict(X)

    # print('#{}: {} @ {} (correct class: {})'.format(n, np.argmax(out), np.amax(out), out[0][correct_class]))
    # print('x {} - {} | y {} - {}'.format(x, x + occlusion_size, y, y + occlusion_size))

    heatmap[y:y + occlusion_size, x:x + occlusion_size] = out[0][correct_class]
    class_pixels[y:y + occlusion_size, x:x + occlusion_size] = np.argmax(out)
    counters[np.argmax(out)] += 1
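The excerpt stops before actually drawing the heatmap, so here is a minimal sketch of that step, assuming the heatmap array filled in by the loop above:

plt.figure(figsize=(6, 6))
plt.imshow(heatmap, cmap='jet', vmin=0.0, vmax=1.0)  # correct-class probability per occluded patch
plt.colorbar()
plt.title('Correct-class probability under occlusion')
plt.show()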

As the tank anecdote mentioned earlier illustrates, how can we tell which part of the image the model focuses on when it makes a prediction? Saliency maps answer this question; they were first introduced in the paper "Deep Inside Convolutional Networks" (Simonyan et al., 2013).

The idea behind saliency maps is quite direct: compute the gradient of the output class score with respect to the input image. This tells us how the output class score changes with small changes in the input pixels. Positive gradient values mark pixels where a small increase raises the output score, so visualizing these gradients highlights the salient image regions that contribute most to the output.

class_idx = 0
indices = np.where(val_y[:, class_idx] == 1.)[0]

# pick some random input from here.
idx = indices[0]

# Lets sanity check the picked image.
from matplotlib import pyplot as plt
%matplotlib inline

plt.rcParams['figure.figsize'] = (18, 6)
plt.imshow(val_x[idx][..., 0])

from vis.visualization import visualize_saliency
from vis.utils import utils
from keras import activations

# Utility to search for layer index by name.
# Alternatively we can specify this as -1 since it corresponds to the last layer.
layer_idx = utils.find_layer_idx(model, 'preds')

# Swap softmax with linear
model.layers[layer_idx].activation = activations.linear
model = utils.apply_modifications(model)

grads = visualize_saliency(model, layer_idx, filter_indices=class_idx, seed_input=val_x[idx])

# Plot with 'jet' colormap to visualize as a heatmap.
plt.imshow(grads, cmap='jet')

# This corresponds to the Dense linear layer.
for class_idx in np.arange(10):
    indices = np.where(val_y[:, class_idx] == 1.)[0]
    idx = indices[0]

    f, ax = plt.subplots(1, 4)
    ax[0].imshow(val_x[idx][..., 0])

    for i, modifier in enumerate([None, 'guided', 'relu']):
        grads = visualize_saliency(model, layer_idx, filter_indices=class_idx,
                                   seed_input=val_x[idx], backprop_modifier=modifier)
        if modifier is None:
            modifier = 'vanilla'
        ax[i+1].set_title(modifier)
        ax[i+1].imshow(grads, cmap='jet')

Class activation mapping (CAM), or grad-CAM, is another model visualization method. Instead of relying on output gradients alone, it uses the output of the penultimate convolutional layer, in order to exploit the spatial information that layer still retains.

from vis.visualization import visualize_cam

# This corresponds to the Dense linear layer.
for class_idx in np.arange(10):
    indices = np.where(val_y[:, class_idx] == 1.)[0]
    idx = indices[0]

    f, ax = plt.subplots(1, 4)
    ax[0].imshow(val_x[idx][..., 0])

    for i, modifier in enumerate([None, 'guided', 'relu']):
        grads = visualize_cam(model, layer_idx, filter_indices=class_idx,
                              seed_input=val_x[idx], backprop_modifier=modifier)
        if modifier is None:
            modifier = 'vanilla'
        ax[i+1].set_title(modifier)
        ax[i+1].imshow(grads, cmap='jet')

This article briefly explained why visualizing CNN models is important and introduced several methods for visualizing them. Hopefully it helps readers build better models in their subsequent deep learning work.

D. Code walkthrough: building an RNN in Python to simulate the brain

The training input has the shape (number_of_records x length_of_sequence x types_of_sequences), and the output has the shape (number_of_records x types_of_sequences), where types_of_sequences is 1. We start by generating a sine wave and slicing it into overlapping windows:

%pylab inline
import math

sin_wave = np.array([math.sin(x) for x in np.arange(200)])
plt.plot(sin_wave[:50])

X = []
Y = []

seq_len = 50
num_records = len(sin_wave) - seq_len

# the first (num_records - 50) windows form the training set
for i in range(num_records - 50):
    X.append(sin_wave[i:i+seq_len])
    Y.append(sin_wave[i+seq_len])

X = np.array(X)
X = np.expand_dims(X, axis=2)
Y = np.array(Y)
Y = np.expand_dims(Y, axis=1)

X.shape, Y.shape

((100, 50, 1), (100, 1))

X_val = []
Y_val = []

# the last 50 windows are held out for validation
for i in range(num_records - 50, num_records):
    X_val.append(sin_wave[i:i+seq_len])
    Y_val.append(sin_wave[i+seq_len])

X_val = np.array(X_val)
X_val = np.expand_dims(X_val, axis=2)
Y_val = np.array(Y_val)
Y_val = np.expand_dims(Y_val, axis=1)

learning_rate = 0.0001
nepoch = 25
T = 50                 # length of sequence
hidden_dim = 100
output_dim = 1
bptt_truncate = 5      # truncated backpropagation-through-time window
min_clip_value = -10   # gradient clipping bounds
max_clip_value = 10

U = np.random.uniform(0, 1, (hidden_dim, T))           # input -> hidden weights
W = np.random.uniform(0, 1, (hidden_dim, hidden_dim))  # hidden -> hidden weights
V = np.random.uniform(0, 1, (output_dim, hidden_dim))  # hidden -> output weights

def sigmoid(x):
    return 1 / (1 + np.exp(-x))
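For reference, these three weight matrices implement the standard vanilla-RNN recurrence, writing $s_t$ for the hidden state at timestep $t$ and $\hat{y}_t$ for the prediction:

$$s_t = \sigma(U x_t + W s_{t-1}), \qquad \hat{y}_t = V s_t$$

where $\sigma$ is the sigmoid defined above; mulu, mulw, and mulv in the code below correspond to $U x_t$, $W s_{t-1}$, and $V s_t$.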

for epoch in range(nepoch):
    # check loss on train
    loss = 0.0

    # do a forward pass to get prediction
    for i in range(Y.shape[0]):
        x, y = X[i], Y[i]                   # get input, output values of each record
        prev_s = np.zeros((hidden_dim, 1))  # prev_s is the previous hidden activation, initialized to zeros
        for t in range(T):
            new_input = np.zeros(x.shape)   # we do a forward pass for every timestep in the sequence
            new_input[t] = x[t]             # for this, we define a single input for that timestep
            mulu = np.dot(U, new_input)
            mulw = np.dot(W, prev_s)
            add = mulw + mulu
            s = sigmoid(add)
            mulv = np.dot(V, s)
            prev_s = s

        # calculate error
        loss_per_record = (y - mulv)**2 / 2
        loss += loss_per_record
    loss = loss / float(y.shape[0])

    # check loss on val
    val_loss = 0.0
    for i in range(Y_val.shape[0]):
        x, y = X_val[i], Y_val[i]
        prev_s = np.zeros((hidden_dim, 1))
        for t in range(T):
            new_input = np.zeros(x.shape)
            new_input[t] = x[t]
            mulu = np.dot(U, new_input)
            mulw = np.dot(W, prev_s)
            add = mulw + mulu
            s = sigmoid(add)
            mulv = np.dot(V, s)
            prev_s = s

        loss_per_record = (y - mulv)**2 / 2
        val_loss += loss_per_record
    val_loss = val_loss / float(y.shape[0])

    print('Epoch: ', epoch + 1, ', Loss: ', loss, ', Val Loss: ', val_loss)

Epoch: 1 , Loss: [[101185.61756671]] , Val Loss: [[50591.0340148]]

...

...

# train model -- this loop runs inside the same `for epoch` loop as the
# loss computation above
for i in range(Y.shape[0]):
    x, y = X[i], Y[i]

    layers = []
    prev_s = np.zeros((hidden_dim, 1))
    dU = np.zeros(U.shape)
    dV = np.zeros(V.shape)
    dW = np.zeros(W.shape)

    dU_t = np.zeros(U.shape)
    dV_t = np.zeros(V.shape)
    dW_t = np.zeros(W.shape)

    dU_i = np.zeros(U.shape)
    dW_i = np.zeros(W.shape)

    # forward pass
    for t in range(T):
        new_input = np.zeros(x.shape)
        new_input[t] = x[t]
        mulu = np.dot(U, new_input)
        mulw = np.dot(W, prev_s)
        add = mulw + mulu
        s = sigmoid(add)
        mulv = np.dot(V, s)
        layers.append({'s': s, 'prev_s': prev_s})
        prev_s = s

    # derivative of pred
    dmulv = (mulv - y)

    # backward pass
    for t in range(T):
        dV_t = np.dot(dmulv, np.transpose(layers[t]['s']))
        dsv = np.dot(np.transpose(V), dmulv)

        ds = dsv
        dadd = add * (1 - add) * ds
        dmulw = dadd * np.ones_like(mulw)
        dprev_s = np.dot(np.transpose(W), dmulw)

        # truncated BPTT: step back at most bptt_truncate timesteps
        # (loop variable renamed to j so it no longer shadows the record index i)
        for j in range(t-1, max(-1, t-bptt_truncate-1), -1):
            ds = dsv + dprev_s
            dadd = add * (1 - add) * ds
            dmulw = dadd * np.ones_like(mulw)
            dmulu = dadd * np.ones_like(mulu)

            dW_i = np.dot(W, layers[t]['prev_s'])
            dprev_s = np.dot(np.transpose(W), dmulw)

            new_input = np.zeros(x.shape)
            new_input[t] = x[t]
            dU_i = np.dot(U, new_input)
            dx = np.dot(np.transpose(U), dmulu)

            dU_t += dU_i
            dW_t += dW_i

        dV += dV_t
        dU += dU_t
        dW += dW_t

        # clip gradients to keep them in a stable range
        if dU.max() > max_clip_value:
            dU[dU > max_clip_value] = max_clip_value
        if dV.max() > max_clip_value:
            dV[dV > max_clip_value] = max_clip_value
        if dW.max() > max_clip_value:
            dW[dW > max_clip_value] = max_clip_value

        if dU.min() < min_clip_value:
            dU[dU < min_clip_value] = min_clip_value
        if dV.min() < min_clip_value:
            dV[dV < min_clip_value] = min_clip_value
        if dW.min() < min_clip_value:
            dW[dW < min_clip_value] = min_clip_value

    # update
    U -= learning_rate * dU
    V -= learning_rate * dV
    W -= learning_rate * dW

Epoch: 1 , Loss: [[101185.61756671]] , Val Loss: [[50591.0340148]]

Epoch: 2 , Loss: [[61205.46869629]] , Val Loss: [[30601.34535365]]

Epoch: 3 , Loss: [[31225.3198258]] , Val Loss: [[15611.65669247]]

Epoch: 4 , Loss: [[11245.17049551]] , Val Loss: [[5621.96780111]]

Epoch: 5 , Loss: [[1264.5157739]] , Val Loss: [[632.02563908]]

Epoch: 6 , Loss: [[20.15654115]] , Val Loss: [[10.05477285]]

Epoch: 7 , Loss: [[17.13622839]] , Val Loss: [[8.55190426]]

Epoch: 8 , Loss: [[17.38870495]] , Val Loss: [[8.68196484]]

Epoch: 9 , Loss: [[17.181681]] , Val Loss: [[8.57837827]]

Epoch: 10 , Loss: [[17.31275313]] , Val Loss: [[8.64199652]]

Epoch: 11 , Loss: [[17.12960034]] , Val Loss: [[8.54768294]]

Epoch: 12 , Loss: [[17.09020065]] , Val Loss: [[8.52993502]]

Epoch: 13 , Loss: [[17.17370113]] , Val Loss: [[8.57517454]]

Epoch: 14 , Loss: [[17.04906914]] , Val Loss: [[8.50658127]]

Epoch: 15 , Loss: [[16.96420184]] , Val Loss: [[8.46794248]]

Epoch: 16 , Loss: [[17.017519]] , Val Loss: [[8.49241316]]

Epoch: 17 , Loss: [[16.94199493]] , Val Loss: [[8.45748739]]

Epoch: 18 , Loss: [[16.99796892]] , Val Loss: [[8.48242177]]

Epoch: 19 , Loss: [[17.24817035]] , Val Loss: [[8.6126231]]

Epoch: 20 , Loss: [[17.00844599]] , Val Loss: [[8.48682234]]

Epoch: 21 , Loss: [[17.03943262]] , Val Loss: [[8.50437328]]

Epoch: 22 , Loss: [[17.01417255]] , Val Loss: [[8.49409597]]

Epoch: 23 , Loss: [[17.20918888]] , Val Loss: [[8.5854792]]

Epoch: 24 , Loss: [[16.92068017]] , Val Loss: [[8.44794633]]

Epoch: 25 , Loss: [[16.76856238]] , Val Loss: [[8.37295808]]

preds = []
for i in range(Y.shape[0]):
    x, y = X[i], Y[i]
    prev_s = np.zeros((hidden_dim, 1))
    # Forward pass
    for t in range(T):
        mulu = np.dot(U, x)
        mulw = np.dot(W, prev_s)
        add = mulw + mulu
        s = sigmoid(add)
        mulv = np.dot(V, s)
        prev_s = s
    preds.append(mulv)

preds = np.array(preds)

plt.plot(preds[:, 0, 0], 'g')
plt.plot(Y[:, 0], 'r')
plt.show()

preds = []
for i in range(Y_val.shape[0]):
    x, y = X_val[i], Y_val[i]
    prev_s = np.zeros((hidden_dim, 1))
    # For each time step...
    for t in range(T):
        mulu = np.dot(U, x)
        mulw = np.dot(W, prev_s)
        add = mulw + mulu
        s = sigmoid(add)
        mulv = np.dot(V, s)
        prev_s = s
    preds.append(mulv)

preds = np.array(preds)

plt.plot(preds[:, 0, 0], 'g')
plt.plot(Y_val[:, 0], 'r')
plt.show()

from sklearn.metrics import mean_squared_error

# RMSE on the validation set (the sine-wave data was never rescaled, so no scaling factor is applied)
math.sqrt(mean_squared_error(Y_val[:, 0], preds[:, 0, 0]))

0.127191931509431
