Fig. 2 | Chinese Medicine

From: Machine learning in TCM with natural products and molecules: current status and future perspectives

The basic structure of the Elman RNN consists of input, recurrent, hidden, and output layers. U, V, and W are the weights of the input, output, and recurrent layers, respectively. The parameter b is the bias term of the hidden layer, and b′ is the bias term of the output layer. Hidden layer: \({h}_{it}=f(U\cdot {X}_{nt}+W\cdot {h}_{i(t-1)}+b)\). Output layer: \({O}_{jt}=V\cdot {h}_{it}+{b}^{\prime }\).
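To make the two equations in the caption concrete, the following is a minimal sketch of the Elman RNN forward pass in Python/NumPy. It assumes tanh as the activation f and uses illustrative layer sizes; none of the variable names or dimensions come from the article itself.

```python
import numpy as np

# Minimal sketch of an Elman RNN forward pass following the caption's equations.
# Assumptions: f = tanh, and the layer sizes below are illustrative only.
rng = np.random.default_rng(0)
n_in, n_hidden, n_out, T = 4, 8, 3, 5

U = rng.standard_normal((n_hidden, n_in))      # input-layer weights
W = rng.standard_normal((n_hidden, n_hidden))  # recurrent-layer weights
V = rng.standard_normal((n_out, n_hidden))     # output-layer weights
b = np.zeros(n_hidden)                         # hidden-layer bias b
b_prime = np.zeros(n_out)                      # output-layer bias b'

X = rng.standard_normal((T, n_in))             # one input sequence of length T
h = np.zeros(n_hidden)                         # initial hidden state h_{t=0}

outputs = []
for t in range(T):
    # Hidden layer: h_t = f(U * x_t + W * h_{t-1} + b)
    h = np.tanh(U @ X[t] + W @ h + b)
    # Output layer: o_t = V * h_t + b'
    outputs.append(V @ h + b_prime)

outputs = np.stack(outputs)                    # shape (T, n_out)
```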
