TensorFlow Neural Network Visualization

期权论坛 · Anonymous · 2021-5-22 15:29
<h1><strong id="docs-internal-guid-2e291562-7fff-9212-ceb6-eb8f28a94823">Visualizing Every Layer of a TensorFlow Neural Network</strong></h1>
<p><strong>Goal: read the parameters from a trained model checkpoint, feed them into the network, and visualize the output of every layer.</strong></p>
<p><strong>Method: use get_tensor to read each layer's parameters from the checkpoint and feed them into the network.</strong></p>
<p><strong>Layers covered: convolution, transposed convolution (deconvolution), and LSTM (no way found yet to initialize b).</strong></p>
<h3><strong>Environment</strong></h3>
<p><strong>1. TensorFlow 1.8.0</strong></p>
<p><strong>2. Python 3.6</strong></p>
<p><strong>3. matplotlib 3.0.3</strong></p>
<h3><strong>Key Steps</strong></h3>
<p><strong>1. Getting the parameters</strong></p>
<p><strong>The code takes the model directory and the name of the layer parameter to read. You can first print all_variables to see the parameter names of every layer.</strong></p>
<pre class="blockcode"><code class="language-python">import os
import tensorflow as tf

def get_parameter(model_dir, key):
    # Find the .meta file to locate the checkpoint prefix
    ckpt = None
    for root, dirs, files in os.walk(model_dir):
        for file in files:
            if os.path.splitext(file)[-1].lower() == '.meta':
                ckpt = file
    if ckpt is None:
        raise FileNotFoundError('no .meta file found in %s' % model_dir)
    # Strip only the trailing '.meta' (checkpoint names may contain dots)
    ckpt_path = os.path.join(model_dir, os.path.splitext(ckpt)[0])
    reader = tf.train.NewCheckpointReader(ckpt_path)
    all_variables = reader.get_variable_to_shape_map()
    print(all_variables)
    return reader.get_tensor(key)</code></pre>
<p><strong>The all_variables output is shown below. You can see generator_model/cv5/w: [1,3,16,16], i.e. the parameters of the fifth convolutional layer of the generator network: a 1x3 kernel with 16 input and 16 output channels. During forward propagation, simply pass in the parameter name.</strong></p>
<p><img alt="" class="blockcode" height="22" src="https://beijingoptbbs.oss-cn-beijing.aliyuncs.com/cs/5606289-9862f1d19fdfe70ac74e3f3fb411dc28.png" width="800"></p>
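<p>Deriving the checkpoint prefix from the .meta filename is sensitive to dots in the name (e.g. <code>model.ckpt-1000.meta</code>): naively splitting on the first dot truncates the prefix. A minimal, TF-free sketch of a robust variant (helper name hypothetical):</p>

```python
import os
import tempfile

def checkpoint_prefix(model_dir):
    """Derive the checkpoint prefix from the .meta file in model_dir.

    os.path.splitext strips only the final '.meta', so names that
    contain dots (e.g. 'model.ckpt-1000.meta') survive intact.
    """
    for name in os.listdir(model_dir):
        if os.path.splitext(name)[-1].lower() == '.meta':
            return os.path.join(model_dir, os.path.splitext(name)[0])
    raise FileNotFoundError('no .meta file in %s' % model_dir)

# Demonstration with a throwaway directory
with tempfile.TemporaryDirectory() as d:
    open(os.path.join(d, 'model.ckpt-1000.meta'), 'w').close()
    print(os.path.basename(checkpoint_prefix(d)))  # model.ckpt-1000
```

The resulting prefix is what tf.train.NewCheckpointReader expects.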
<p><strong>2. Building the forward-propagation network</strong></p>
<p><strong>The network contains convolution, transposed convolution, and LSTM layers.</strong></p>
<p><strong>2.1 Convolutional layer</strong></p>
<p><strong>Required inputs: the input tensor, kernel weights w, biases b, strides, padding, and the activation function.</strong></p>
<pre class="blockcode"><code class="language-python">def conv2d_layer(data, kernel_w, biases, strides=[1, 1, 1, 1], padding='SAME',
                 activation_function_type='lrelu', keep_prob=1,
                 bias=True, dropout=False):
    # Convolve, optionally add the bias, then apply the activation
    conv = tf.nn.conv2d(data, kernel_w, strides=strides, padding=padding)
    if bias:
        h = activation_function(conv + biases, activation_function_type)
    else:
        h = activation_function(conv, activation_function_type)
    if dropout:
        out = tf.nn.dropout(h, keep_prob)
    else:
        out = h
    return out</code></pre>
<p><strong>2.2 Activation functions</strong></p>
<pre class="blockcode"><code class="language-python">def activation_function(x, activation_function_type):
    if activation_function_type == 'lrelu':
        h = tf.nn.leaky_relu(x)
    elif activation_function_type == 'tanh':
        h = tf.tanh(x)
    elif activation_function_type == 'sigmoid':
        h = tf.sigmoid(x)
    elif activation_function_type == 'relu':
        h = tf.nn.relu(x)
    elif activation_function_type == 'linear':
        h = x
    elif activation_function_type == 'softmax':
        h = tf.nn.softmax(x)
    else:
        raise ValueError('unknown activation: %s' % activation_function_type)
    return h</code></pre>
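<p>To sanity-check these activations without building a TF graph, NumPy equivalents can be compared against small inputs. The helper below is hypothetical; the leaky-ReLU slope 0.2 matches the default of tf.nn.leaky_relu:</p>

```python
import numpy as np

def np_activation(x, kind):
    # NumPy mirrors of the activation dispatch above (hypothetical helper)
    if kind == 'lrelu':
        return np.where(x > 0, x, 0.2 * x)  # tf.nn.leaky_relu default alpha=0.2
    if kind == 'tanh':
        return np.tanh(x)
    if kind == 'sigmoid':
        return 1.0 / (1.0 + np.exp(-x))
    if kind == 'relu':
        return np.maximum(x, 0.0)
    if kind == 'linear':
        return x
    raise ValueError('unknown activation: %s' % kind)

x = np.array([-2.0, 0.0, 3.0])
print(np_activation(x, 'lrelu'))  # negative values scaled by 0.2
print(np_activation(x, 'relu'))   # negative values clipped to 0
```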
<p><strong>Convolution example</strong></p>
<p><strong>Read the convolutional layer's w and b and feed them into the network:</strong></p>
<pre class="blockcode"><code class="language-python">conv2_1 = conv2d_layer(x, get_parameter(model_dir, 'generator_model/cv1/w'),
                       get_parameter(model_dir, 'generator_model/cv1/b'),
                       strides=strides, activation_function_type='lrelu',
                       padding='SAME')</code></pre>
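<p>For intuition about the operation tf.nn.conv2d performs, here is a minimal single-channel, stride-1, 'VALID' cross-correlation in NumPy (hypothetical helper; real conv2d additionally sums over input channels and batches):</p>

```python
import numpy as np

def conv2d_valid(img, kernel):
    # Single-channel 2-D cross-correlation, stride 1, no padding
    kh, kw = kernel.shape
    oh = img.shape[0] - kh + 1
    ow = img.shape[1] - kw + 1
    out = np.zeros((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = np.sum(img[i:i+kh, j:j+kw] * kernel)
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
kernel = np.ones((1, 3)) / 3.0   # a 1x3 averaging kernel, same shape as cv5's
print(conv2d_valid(img, kernel))
```

Each output element is the kernel-weighted sum of the window it covers, which is exactly what the loaded w applies per channel pair.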
<p><strong>2.3 Transposed convolution (deconvolution) layer</strong></p>
<p><strong>Transposed-convolution parameters: the input, the output shape, kernel weights w, biases b, and the activation function.</strong></p>
<pre class="blockcode"><code class="language-python">def upconv2d_layer(data, output_shape, w_init, b_init=0, strides=[1, 1, 1, 1],
                   padding='SAME', activation_function_type='lrelu',
                   keep_prob=1, bias=False):
    conv = tf.nn.conv2d_transpose(data, w_init, output_shape, strides,
                                  padding=padding)
    if bias:
        h = activation_function(conv + b_init, activation_function_type)
    else:
        h = activation_function(conv, activation_function_type)
    if 0 &lt; keep_prob &lt; 1:
        out = tf.nn.dropout(h, keep_prob)
    else:
        out = h
    return out</code></pre>
<p><strong>Transposed-convolution example</strong></p>
<pre class="blockcode"><code class="language-python">dconv2_5 = upconv2d_layer(dconv2_6_in,
                          output_shape=(n_feature, n_height, 9, 16),
                          w_init=get_parameter(model_dir, 'generator_model/upcv6/w'),
                          strides=strides, padding='VALID',
                          activation_function_type='lrelu', bias=False)</code></pre>
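<p>The output_shape passed to conv2d_transpose must match what a forward convolution with the same kernel, stride, and padding would have consumed. A small sketch of that size arithmetic (hypothetical helper, per-dimension):</p>

```python
def deconv_output_size(in_size, kernel_size, stride, padding):
    # Inverse of the forward-convolution output-size formula
    if padding == 'VALID':
        return (in_size - 1) * stride + kernel_size
    if padding == 'SAME':
        return in_size * stride
    raise ValueError('unknown padding: %s' % padding)

# e.g. a width-7 input with a 1x3 kernel, stride 1, 'VALID' yields width 9,
# consistent with the 9 in the output_shape above
print(deconv_output_size(7, 3, 1, 'VALID'))  # 9
```

If the computed size disagrees with the output_shape you pass in, conv2d_transpose raises a shape error at run time.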
<p><strong>2.4 LSTM</strong></p>
<p><strong>For the LSTM, currently only the size of W is passed in; how to initialize b is still unresolved.</strong></p>
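<p>As a sketch of how a loaded W would be used, here is one LSTM step in NumPy, assuming the kernel layout of tf.nn.rnn_cell.BasicLSTMCell ([input_size + num_units, 4 * num_units], gate order i, j, f, o) and, per the unresolved point above, b simply left at zeros. All names are hypothetical:</p>

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def lstm_step(x, h, c, W, b, forget_bias=1.0):
    """One LSTM step with a checkpoint-style kernel.

    W: [input_size + num_units, 4 * num_units], gates ordered i, j, f, o
    (the BasicLSTMCell layout, an assumption here); b is the bias vector,
    kept at zeros since its initialization is still unresolved.
    """
    z = np.concatenate([x, h]) @ W + b
    i, j, f, o = np.split(z, 4)
    new_c = c * sigmoid(f + forget_bias) + sigmoid(i) * np.tanh(j)
    new_h = np.tanh(new_c) * sigmoid(o)
    return new_h, new_c

num_units, input_size = 4, 3
rng = np.random.default_rng(0)
W = rng.standard_normal((input_size + num_units, 4 * num_units))
b = np.zeros(4 * num_units)          # unresolved: left at zeros for now
h = c = np.zeros(num_units)
x = rng.standard_normal(input_size)
h, c = lstm_step(x, h, c, W, b)
print(h.shape, c.shape)  # (4,) (4,)
```

With b at zeros the step still runs, but its outputs will differ from the trained model's until the bias can be read and injected like w.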