CMAC advantages

Faster, Higher, Stronger

Scalability: Straightforward to extend to millions of neurons or beyond
Convergence: The training can converge in one step
Function derivatives: Straightforward to obtain by employing B-spline interpolation
Hardware structure: Parallel pipeline structure
Memory usage: Linear in the number of neurons
Computational complexity: O(N) (see the lookup sketch after this list)
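
The memory and complexity entries above follow from the table-lookup structure of a CMAC: each input activates a fixed number of overlapping cells, and the output is just the sum of their weights. The following is a minimal, illustrative sketch of that lookup for a 1-D input; the class name, layer offsets, and parameter values are assumptions for illustration, not taken from the cited work.

```python
import numpy as np

class SimpleCMAC:
    """Illustrative 1-D CMAC lookup table (tile coding)."""

    def __init__(self, n_tiles=100, n_active=8, x_min=0.0, x_max=1.0):
        self.C = n_active                    # number of overlapping layers
        self.n_tiles = n_tiles               # tiles per layer
        self.x_min, self.x_max = x_min, x_max
        # One weight row per layer: memory grows linearly with the
        # total number of cells (neurons).
        self.w = np.zeros((n_active, n_tiles))

    def active_cells(self, x):
        """Index of the active tile in each layer for input x."""
        span = (self.x_max - self.x_min) / self.n_tiles
        idx = []
        for layer in range(self.C):
            offset = layer * span / self.C   # each layer is slightly shifted
            i = int((x - self.x_min + offset) / span) % self.n_tiles
            idx.append(i)
        return idx

    def predict(self, x):
        """Sum the C active weights; only C of the stored weights are touched."""
        return sum(self.w[layer, i]
                   for layer, i in enumerate(self.active_cells(x)))
```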

CMAC convergence

Initially, the least mean squares (LMS) method was employed to update the weights of CMAC. The convergence of LMS training for CMAC is sensitive to the learning rate and can diverge if the rate is chosen poorly. In 2004, I introduced a recursive least squares (RLS) algorithm to train CMAC online. It does not require tuning a learning rate, its convergence has been proved theoretically, and it is guaranteed to converge in one step. The computational complexity of this RLS algorithm is O(N³).
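
Because the CMAC output is linear in its weights, y = a(x)·w for a binary association vector a(x), both update rules can be written compactly. The sketch below shows a generic LMS step and a generic RLS step for such a model; the function names, the forgetting factor lam, and the per-cell normalization are illustrative assumptions, not the exact formulation of the cited paper.

```python
import numpy as np

def lms_update(w, a, y_target, alpha=0.1):
    """One LMS step: nudge the active weights toward the target.
    Convergence depends on the learning rate alpha and may diverge
    if alpha is too large."""
    error = y_target - a @ w
    # Classic CMAC practice spreads the correction over the active cells.
    return w + alpha * error * a / max(a.sum(), 1)

def rls_update(w, P, a, y_target, lam=1.0):
    """One recursive least squares step: no learning rate to tune.
    P tracks the inverse input correlation matrix, so a single call
    costs O(N^2) arithmetic in the number of weights N."""
    Pa = P @ a
    k = Pa / (lam + a @ Pa)            # gain vector
    w = w + k * (y_target - a @ w)     # weight correction
    P = (P - np.outer(k, Pa)) / lam    # update inverse correlation matrix
    return w, P
```

In a typical RLS setup the matrix P is initialized to a large multiple of the identity, e.g. P = np.eye(N) / delta with a small positive delta; this initialization choice is an assumption here, not prescribed by the cited paper.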

Qin, Ting, et al. “A learning algorithm of CMAC based on RLS.” Neural Processing Letters 19.1 (2004): 49-61.