<p>cp16_Model Sequential_Output_Hidden_Recurrent NNs_LSTM_aclImdb_IMDb_Embed_token_py_function_GRU_Gate:<br><a href="https://blog.csdn.net/Linli522362242/article/details/113846940">https://blog.csdn.net/Linli522362242/article/details/113846940</a></p>
<pre class="blockcode"><code class="language-python">from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Embedding, GRU, Dense

model = Sequential()
model.add( Embedding(10000, 32) )            # vocab size 10,000, 32-dim embeddings
model.add( GRU(32, return_sequences=True) )  # return the full sequence so the next GRU can consume it
model.add( GRU(32) )                         # last recurrent layer returns only the final output
model.add( Dense(1) )
model.summary()</code></pre>
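<p>The parameter counts reported by <code>model.summary()</code> can be reproduced by hand as a sanity check. The sketch below assumes TensorFlow 2's GRU default <code>reset_after=True</code>, under which each of the three gates has an input kernel, a recurrent kernel, and two bias vectors:</p>

```python
def gru_params(units, input_dim):
    # 3 gates (update, reset, candidate), each with:
    #   input kernel  (units * input_dim),
    #   recurrent kernel (units * units),
    #   and, with reset_after=True (the TF2 default), an input bias
    #   plus a recurrent bias, hence 2 * units.
    return 3 * (units * input_dim + units * units + 2 * units)

# Both GRU(32) layers above receive 32-dimensional inputs
# (from the Embedding and from the first GRU, respectively).
print(gru_params(32, 32))  # 6336 (it would be 6240 with reset_after=False)
```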
<p><img alt="" height="202" src="https://beijingoptbbs.oss-cn-beijing.aliyuncs.com/cs/5606289-a46661a1449a16e5121669309ab19e01.png" width="396"></p>
<h3>Building an RNN model for the sentiment analysis task</h3>
<p> Since we have very long sequences, we are going to use an LSTM (long short-term memory) layer to account for long-term effects. In addition, we will <span style="color:#7c79e5;"><strong>put the LSTM layer inside a Bidirectional wrapper</strong></span>, which makes the recurrent layer process the input sequences in both directions, from start to end as well as in reverse:</p>
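<p>As a minimal sketch of this idea (the vocabulary size, embedding dimension, and layer widths below are placeholder values, not necessarily the ones used for the IMDb model):</p>

```python
import numpy as np
from tensorflow.keras import Sequential
from tensorflow.keras.layers import Embedding, Bidirectional, LSTM, Dense

vocab_size, embedding_dim = 10000, 20   # placeholder hyperparameters

model = Sequential([
    Embedding(vocab_size, embedding_dim),
    # Bidirectional runs the LSTM forward and backward over each sequence
    # and (with the default merge_mode='concat') concatenates the two
    # 64-dim outputs into a 128-dim representation.
    Bidirectional(LSTM(64)),
    Dense(64, activation='relu'),
    Dense(1, activation='sigmoid'),     # binary sentiment probability
])

# A batch of 2 token-id sequences of length 50:
dummy = np.random.randint(0, vocab_size, size=(2, 50))
probs = model.predict(dummy, verbose=0)
print(probs.shape)   # (2, 1)
```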
<p> A <strong>Bidirectional LSTM</strong>, or <strong>biLSTM</strong>, is a sequence processing model that consists of <span style="color:#f33b45;"><strong>two LSTMs</strong></span>: <span style="color:#7c79e5;"><strong>one taking the input in a forward direction</strong></span>, and the other in a<span style="color:#7c79e5;"><strong> backwards direction</strong></span>. BiLSTMs effectively<span style="color:#7c79e5;"><strong> increase the amount of information available to the network, improving the context available to the algorithm</strong></span> (e.g. knowing what words immediately follow <em>and</em> precede a word in a sentence)<br><img alt="" height="278" src="https://beijingoptbbs.oss-cn-beijing.aliyuncs.com/cs/5606289-d38712269bbf215d120e94cd6d5cbd20.png" width="585"></p>
<p><a href="https://keras.io/layers/wrappers/#bidirectional">Bidirectional</a> layer wrapper provides the implementation of Bidirectional LSTMs in Keras</p>
<pre class="blockcode"><code class="language-python">tf.keras.layers.Bidirectional(
layer, merge_mode="concat", weights=None, backward_layer=None, **kwargs
)</code></pre>
<p>Bidirectional wrapper for RNNs.</p>
<p><strong>Arguments</strong></p>
<ul><li><strong>layer</strong>: <code>keras.layers.RNN</code> instance, such as <code>keras.layers.LSTM</code> or <code>keras.layers.GRU</code>. It could also be a <code>keras.layers.Layer</code> instance that meets the following criteria:
<ol><li>Be a sequence-processing layer (accepts 3D+ inputs).</li><li>Have <code>go_backwards</code>, <code>return_sequences</code> and <code>return_state</code> attributes (with the same semantics as for the <code>RNN</code> class).</li><li>Have an <code>input_spec</code> attribute.</li><li>Implement serialization via <code>get_config()</code> and <code>from_config()</code>. Note that the recommended way to create new RNN layers is to write a custom RNN cell and use it with <code>keras.layers.RNN</code>, instead of subclassing <code>keras.layers.Layer</code> directly.</li></ol></li><li><span style="color:#7c79e5;"><strong>merge_mode</strong></span>: Mode by which outputs of the forward and backward RNNs will be combined. One of {'sum', 'mul', 'concat', 'ave', None}. If None, the outputs will not be combined; they will be returned as a list. <strong><span style="color:#7c79e5;">Default value is 'concat'.</span></strong></li><li><strong>backward_layer</strong>: Optional <code>keras.layers.RNN</code>, or <code>keras.layers.Layer</code> instance to be used to handle backwards input processing.<span style="color:#7c79e5;"><strong> If <code>backward_layer</code> is not provided, the layer instance passed as the <code>layer</code> argument will be used to generate the backward layer automatically</strong></span>. Note that the provided <strong><code>backward_layer</code> should have properties matching those of the <code>layer</code> argument</strong>; in particular, it should have the same values for <code>stateful</code>, <code>return_state</code>, <code>return_sequences</code>, etc. In addition, <code>backward_layer</code> and <code>layer</code> should have different <code>go_backwards</code> argument values. A <code>ValueError</code> will be raised if these requirements are not met.</li></ul>
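<p>Tying these arguments together, here is a small sketch (layer sizes are arbitrary) of passing an explicit <code>backward_layer</code>; note that it mirrors the forward layer's settings except for <code>go_backwards</code>:</p>

```python
from tensorflow.keras.layers import LSTM, Bidirectional, Input
from tensorflow.keras.models import Model

inputs = Input(shape=(None, 8))            # variable-length sequences, 8 features per step
forward = LSTM(16, return_sequences=True)
# Must match the forward layer's settings, except go_backwards=True:
backward = LSTM(16, return_sequences=True, go_backwards=True)

outputs = Bidirectional(forward, backward_layer=backward)(inputs)
model = Model(inputs, outputs)
print(model.output_shape)   # (None, None, 32): 16 forward + 16 backward, concatenated
```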
<p>It takes <span style="color:#7c79e5;"><strong>a recurrent layer (first LSTM layer) </strong></span>as an argument and you can also specify the merge mode, that describes how forward and backward outputs should be merged <strong>before being passed on to the coming layer.</strong> The options are:</p>
<p><strong>–</strong> ‘<em>sum</em>‘: The results are added together.</p>
<p><strong>–</strong> ‘<em>mul</em>‘: The results are multiplied together.</p>
<p><strong>–<span style="color:#7c79e5;"> ‘<em>concat</em>‘ (the default): The results are concatenated together, providing double the number of outputs to the next layer.</span></strong></p>
<p><strong>–</strong> ‘<em>ave</em>‘: The average of the results is taken.</p>
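<p>The choice of <code>merge_mode</code> only affects the width of the merged output. A small helper (hypothetical, for illustration only) makes the arithmetic explicit:</p>

```python
def bidirectional_output_dim(units, merge_mode="concat"):
    # 'concat' stacks the forward and backward outputs side by side;
    # 'sum', 'mul' and 'ave' combine them element-wise, keeping one width;
    # None skips merging and returns the two outputs as a list.
    if merge_mode == "concat":
        return 2 * units
    if merge_mode in ("sum", "mul", "ave"):
        return units
    return [units, units]

print(bidirectional_output_dim(64))          # 128
print(bidirectional_output_dim(64, "sum"))   # 64
```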
<p><strong>###################</strong><br><span style="color:#7c79e5;"><strong> embedding_dim:</strong></span></p>