neural network

Contents

Idea

A neural network is a class of functions used in both supervised and unsupervised learning to approximate a correspondence between samples in a dataset and their associated labels.

Definition

Let $K \subset \mathbb{R}^d$ be compact, $\{T_\ell\}_{1 \leq \ell \leq L}$ a finite set of affine maps with $T_\ell(x) = \langle W_\ell, x \rangle + b_\ell$, where $W_\ell$ is the $\ell^{th}$ layer weight matrix and $b_\ell$ the $\ell^{th}$ layer bias, and let $g \colon \mathbb{R} \to \mathbb{R}$ be a non-linear activation function. A neural network is a function $f \colon K \to \mathbb{R}^m$ that, on input $x$, computes the composition:

$$f(x) = (T_L \circ g \circ T_{L-1} \circ g \circ \dots \circ T_1)(x)$$

where $g$ is applied component-wise.
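
For concreteness, here is a minimal sketch of this composition in Python with NumPy; the function name `neural_network` and the example shapes are hypothetical, not taken from any particular library:

```python
import numpy as np

def neural_network(x, weights, biases, g=np.tanh):
    """Evaluate f(x) = (T_L o g o T_{L-1} o g o ... o T_1)(x),
    where T_ell(h) = W_ell @ h + b_ell and g acts component-wise."""
    h = x
    for W, b in zip(weights[:-1], biases[:-1]):
        h = g(W @ h + b)                     # affine map, then activation
    return weights[-1] @ h + biases[-1]      # final affine map T_L, no activation

# Example: f : R^3 -> R^2 with one 4-dimensional hidden layer
rng = np.random.default_rng(0)
weights = [rng.normal(size=(4, 3)), rng.normal(size=(2, 4))]
biases = [rng.normal(size=4), rng.normal(size=2)]
y = neural_network(np.ones(3), weights, biases)  # y is a vector in R^2
```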

Typically, $T_1$ is called the input layer, $T_L$ the output layer, and layers $T_2$ to $T_{L-1}$ are hidden layers. In particular, a real-valued neural network with one hidden layer computes:

$$f(x) = b' + \sum_{i=1}^n a_i g(\langle W_i, x \rangle + b)$$

where $a = (a_1, \dots, a_n)$ is the output weight, $b'$ the output bias, $W_i$ the $i^{th}$ row of the hidden weight matrix, and $b$ the hidden bias. Here, the hidden layer is $n$-dimensional.
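
The one-hidden-layer case admits an even shorter sketch under the same assumptions (Python with NumPy, hypothetical names); it is exactly the composition above with $L = 2$ and $m = 1$:

```python
import numpy as np

def one_hidden_layer(x, a, W, b, b_out, g=np.tanh):
    """Evaluate f(x) = b' + sum_{i=1}^n a_i * g(<W_i, x> + b).

    a: output weight vector of length n; W: hidden weight matrix with rows W_i;
    b: scalar hidden bias shared across units, as in the formula above;
    b_out: the output bias b'."""
    return b_out + a @ g(W @ x + b)

# Example: n = 4 hidden units on inputs in R^3
rng = np.random.default_rng(0)
f_x = one_hidden_layer(np.ones(3), a=rng.normal(size=4),
                       W=rng.normal(size=(4, 3)), b=0.1, b_out=0.0)
```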

References

On the learning algorithm as gradient descent of the loss functional:

On the learning algorithm as analogous to the AdS/CFT correspondence:

  • Yi-Zhuang You, Zhao Yang, Xiao-Liang Qi, Machine Learning Spatial Geometry from Entanglement Features, Phys. Rev. B 97, 045153 (2018) (arXiv:1709.01223)

  • W. C. Gan and F. W. Shu, Holography as deep learning, Int. J. Mod. Phys. D 26, no. 12, 1743020 (2017) (arXiv:1705.05750)

  • J. W. Lee, Quantum fields as deep learning (arXiv:1708.07408)

  • Koji Hashimoto, Sotaro Sugishita, Akinori Tanaka, Akio Tomiya, Deep Learning and AdS/CFT, Phys. Rev. D 98, 046019 (2018) (arXiv:1802.08313)

A category-theoretic treatment of backpropagation:

  • Brendan Fong, David Spivak, Rémy Tuyéras, Backprop as Functor: A compositional perspective on supervised learning (arXiv:1711.10455)
