<h1><strong id="docs-internal-guid-2e291562-7fff-9212-ceb6-eb8f28a94823"> Visualizing Every Layer of a TensorFlow Neural Network</strong></h1>
<p><strong>Goal: read the parameters produced by training, feed them into the network, and visualize the output of every layer.</strong></p>
<p><strong>Method: use the get_tensor method to fetch the corresponding layer's parameters from the checkpoint, then feed them into the network.</strong></p>
<p><strong>Layers covered: convolution, transposed convolution (deconvolution), and LSTM (no way found yet to initialize the bias b).</strong></p>
<h3><strong>Environment</strong></h3>
<p><strong>1. TensorFlow 1.8.0</strong></p>
<p><strong>2. Python 3.6</strong></p>
<p><strong>3. matplotlib 3.0.3</strong></p>
<h3><strong>Core Steps</strong></h3>
<p><strong>1. Fetching the parameters</strong></p>
<p><strong>The code takes the directory holding the model and the name of the layer parameter to read. Printing all_variables first shows the parameter names of every layer.</strong></p>
<pre class="blockcode"><code class="language-python">import os
import tensorflow as tf

def get_parameter(model_dir, key):
    # Locate the .meta file to recover the checkpoint prefix
    for root, dirs, files in os.walk(model_dir):
        for file in files:
            if os.path.splitext(file)[-1].lower() == '.meta':
                ckpt = file
    ckpt_path = model_dir + ckpt.split('.')[0]
    reader = tf.train.NewCheckpointReader(ckpt_path)
    all_variables = reader.get_variable_to_shape_map()
    print(all_variables)  # maps each variable name to its shape
    data = reader.get_tensor(key)
    return data</code></pre>
<p><strong>The all_variables output is shown in the figure below. Note the entry generator_model/cv5/w: [1, 3, 16, 16], which is the fifth convolution layer of the generator network: a 1x3 kernel with 16 input and 16 output channels. During forward propagation we simply pass this parameter name in.</strong></p>
<p><img alt="" class="blockcode" height="22" src="https://beijingoptbbs.oss-cn-beijing.aliyuncs.com/cs/5606289-9862f1d19fdfe70ac74e3f3fb411dc28.png" width="800"></p>
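<p>Since get_variable_to_shape_map() returns a plain Python dict of name to shape, the printed output can be filtered before choosing a key. A minimal sketch; the dict contents here are illustrative stand-ins matching the figure above, not read from a real checkpoint:</p>
<pre class="blockcode"><code class="language-python"># Illustrative stand-in for reader.get_variable_to_shape_map()
all_variables = {
    'generator_model/cv5/w': [1, 3, 16, 16],
    'generator_model/cv5/b': [16],
    'generator_model/upcv6/w': [1, 3, 16, 16],
}

# Keep only the generator's convolution kernels
conv_kernels = {name: shape for name, shape in all_variables.items()
                if name.startswith('generator_model/cv') and name.endswith('/w')}
print(conv_kernels)  # {'generator_model/cv5/w': [1, 3, 16, 16]}</code></pre>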
<p><strong>2. Building the forward-propagation network</strong></p>
<p><strong>The network contains convolution, transposed convolution, and LSTM layers.</strong></p>
<p><strong>2.1 Convolution layer</strong></p>
<p><strong>Required arguments: input, kernel weights w, kernel bias b, strides, padding, and the activation function.</strong></p>
<pre class="blockcode"><code class="language-python">def conv2d_layer(data, kernel_w, biases, strides=[1, 1, 1, 1], padding='SAME',
                 activation_function_type='lrelu', keep_prob=1,
                 bias=True, dropout=False):
    cov = tf.nn.conv2d(data, kernel_w, strides=strides, padding=padding)
    if bias:
        h = activation_function(cov + biases, activation_function_type)
    else:
        h = activation_function(cov, activation_function_type)
    if dropout:
        out = tf.nn.dropout(h, keep_prob)
    else:
        out = h
    return out</code></pre>
<p><strong>2.2 Activation function</strong></p>
<pre class="blockcode"><code class="language-python">def activation_function(x, activation_function_type):
    if activation_function_type == 'lrelu':
        h = tf.nn.leaky_relu(x)  # built into TF since 1.4; replaces the custom helper
    elif activation_function_type == 'tanh':
        h = tf.tanh(x)
    elif activation_function_type == 'sigmoid':
        h = tf.sigmoid(x)
    elif activation_function_type == 'relu':
        h = tf.nn.relu(x)
    elif activation_function_type == 'linear':
        h = x
    elif activation_function_type == 'softmax':
        h = tf.nn.softmax(x)
    return h</code></pre>
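<p>The dispatch above only chooses among element-wise functions, so its behavior can be sanity-checked with a small NumPy analogue. This is a sketch, not the TensorFlow code itself; alpha=0.2 mirrors the default slope of tf.nn.leaky_relu:</p>
<pre class="blockcode"><code class="language-python">import numpy as np

def activation_np(x, kind, alpha=0.2):
    # NumPy mirror of the TensorFlow activation dispatch
    if kind == 'lrelu':
        return np.where(x > 0, x, alpha * x)
    elif kind == 'tanh':
        return np.tanh(x)
    elif kind == 'sigmoid':
        return 1.0 / (1.0 + np.exp(-x))
    elif kind == 'relu':
        return np.maximum(x, 0)
    elif kind == 'linear':
        return x
    raise ValueError(kind)

x = np.array([-1.0, 0.0, 2.0])
print(activation_np(x, 'lrelu'))  # [-0.2  0.   2. ]</code></pre>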
<p><strong>Convolution example</strong></p>
<p><strong>Fetch the convolution layer's parameters w and b and feed them into the network:</strong></p>
<pre class="blockcode"><code class="language-python">conv2_1 = conv2d_layer(x, get_parameter(model_dir, 'generator_model/cv1/w'),
                       get_parameter(model_dir, 'generator_model/cv1/b'),
                       strides=strides, activation_function_type='lrelu', padding='SAME')</code></pre>
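<p>Once a layer's output has been evaluated (e.g. with sess.run), each channel can be drawn as one subplot to visualize the layer. A minimal sketch using random data in place of a real feature map; the shape [1, 8, 8, 16] and the file name are illustrative:</p>
<pre class="blockcode"><code class="language-python">import numpy as np
import matplotlib
matplotlib.use('Agg')  # headless backend so the script runs without a display
import matplotlib.pyplot as plt

# Stand-in for sess.run(conv2_1, feed_dict=...): batch 1, 8x8 maps, 16 channels
feature_map = np.random.rand(1, 8, 8, 16)

fig, axes = plt.subplots(4, 4, figsize=(8, 8))
for i, ax in enumerate(axes.flat):
    ax.imshow(feature_map[0, :, :, i], cmap='gray')  # one channel per subplot
    ax.axis('off')
fig.savefig('conv2_1_channels.png')</code></pre>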
<p><strong>2.3 Transposed convolution (deconvolution) layer</strong></p>
<p><strong>Arguments: input, output shape, kernel weights w, kernel bias b, and the activation function.</strong></p>
<pre class="blockcode"><code class="language-python">def upconv2d_layer(data, output_shape, w_init, b_init=0, strides=[1, 1, 1, 1],
                   padding='SAME', activation_function_type='lrelu',
                   keep_prob=1, bias=False):
    conv = tf.nn.conv2d_transpose(data, w_init, output_shape, strides, padding=padding)
    if bias:
        h = activation_function(conv + b_init, activation_function_type)
    else:
        h = activation_function(conv, activation_function_type)
    if 0 < keep_prob < 1:
        out = tf.nn.dropout(h, keep_prob)
    else:
        out = h
    return out</code></pre>
<p><strong>Transposed convolution example</strong></p>
<pre class="blockcode"><code class="language-python">dconv2_5 = upconv2d_layer(dconv2_6_in, output_shape=(n_feature, n_height, 9, 16),
                          w_init=get_parameter(model_dir, 'generator_model/upcv6/w'),
                          strides=strides, padding='VALID',
                          activation_function_type='lrelu', bias=False)</code></pre>
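<p>The output_shape passed to tf.nn.conv2d_transpose must be consistent with the input size, stride, and kernel; for 'VALID' padding each spatial dimension follows out = (in - 1) * stride + kernel, and for 'SAME' it is in * stride. A small helper to check this by hand; the numbers below are illustrative, not taken from the model above:</p>
<pre class="blockcode"><code class="language-python">def deconv_out_dim(in_dim, kernel, stride, padding):
    # Output size of one spatial dimension of tf.nn.conv2d_transpose
    if padding == 'VALID':
        return (in_dim - 1) * stride + kernel
    elif padding == 'SAME':
        return in_dim * stride
    raise ValueError(padding)

# e.g. a width-7 input through a width-3 kernel, stride 1, VALID padding
print(deconv_out_dim(7, 3, 1, 'VALID'))  # 9</code></pre>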
<p><strong>2.4 LSTM</strong></p>
<p><strong>For the LSTM, only the size of W is currently passed in; initializing b from the checkpoint is still unsolved.</strong></p>