Machine learning and deep neural networks have revolutionized many fields, the most obvious examples being computer vision and natural language processing. Alongside the surging sizes of sophisticated models, an emerging trend takes the opposite route: deploying lightweight models on the edge (the terminal or user end) for relatively simple AI tasks. This is called edge AI, and it is often constrained to run under tight compute and storage budgets. In this talk, we will explore recent theory in neural network modeling that allows us to avoid AI training altogether, a process that used to be slow, daunting, or even impossible on the edge. Specifically, we will scratch the surface of the neural tangent kernel and try to establish (well… qualitatively) an equivalence between data and network, such that once the data are ready, the network is instantly ready, too.
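To give a flavor of the "training-free" idea (this sketch is an illustration of the general NTK recipe, not necessarily the exact method covered in the talk): in the infinite-width limit, a one-hidden-layer ReLU network is equivalent to kernel regression with a closed-form neural tangent kernel, so "fitting" reduces to a single linear solve on the data rather than iterative gradient training.

```python
import numpy as np

def ntk_relu(X1, X2):
    # Closed-form NTK of an infinitely wide one-hidden-layer ReLU network.
    # Inputs are rows; u is the cosine of the angle between input pairs.
    n1 = np.linalg.norm(X1, axis=1, keepdims=True)
    n2 = np.linalg.norm(X2, axis=1, keepdims=True)
    dot = X1 @ X2.T
    u = np.clip(dot / (n1 * n2.T), -1.0, 1.0)
    theta = np.arccos(u)
    # NNGP (arc-cosine) part plus the gradient-flow part of the NTK.
    nngp = (n1 * n2.T) * (np.sin(theta) + (np.pi - theta) * u) / np.pi
    return nngp + dot * (np.pi - theta) / np.pi

# "Instant" fit: once the data are ready, one ridge-regularized linear
# solve replaces the whole training loop (toy data for illustration).
rng = np.random.default_rng(0)
X = rng.normal(size=(20, 3))
y = np.sin(X[:, 0])
K = ntk_relu(X, X)
alpha = np.linalg.solve(K + 1e-6 * np.eye(len(X)), y)  # no SGD, no epochs

X_test = rng.normal(size=(5, 3))
preds = ntk_relu(X_test, X) @ alpha
```

The point of the sketch is the qualitative equivalence teased above: the kernel matrix `K` is determined entirely by the data, so the "network" exists as soon as the data do.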