In the pursuit of machine intelligence, "learning", i.e., computer algorithms that improve automatically with experience, is central to our research focus. In today's era of information technology, an abundance of data is generated by diverse sources. The development of computational methodologies for learning data representations remains our research aim. Data inherit uncertainties from their sources, prompting researchers to apply probability and fuzzy theories in machine learning as a means to handle these uncertainties. Probabilistic directed generative models, being potentially capable of uncertainty assessment, have been widely applied in machine learning. Data-driven parametric models have also received the most attention from fuzzy/probabilistic machine learning researchers. However, the parametric approach suffers from a lack of knowledge regarding the optimal model structure. In contrast, the nonparametric approach allows the model structure to grow as needed to fit the complexity of the data. Despite numerous research studies on deep learning, some fundamental issues remain unaddressed:
- The propagation of non-statistical uncertainty across the layers of a deep model needs to be mathematically analyzed.
- The learning of deep models requires substantial computation time, since a large number of model parameters are estimated using ad hoc, iterative, gradient-descent-based numerical algorithms. The learning time for a deep model needs to be reduced.
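
To make the second issue concrete, the following is a minimal sketch (not the authors' method) of the iterative gradient-descent parameter estimation referred to above, applied to a toy least-squares problem in plain NumPy; all names and values here are illustrative assumptions:

```python
import numpy as np

# Toy illustration: estimate the parameters w of a linear model y = X @ w
# by iterative gradient descent on the mean-squared-error loss.
rng = np.random.default_rng(0)
X = rng.normal(size=(100, 3))          # synthetic inputs (illustrative)
w_true = np.array([1.5, -2.0, 0.5])    # ground-truth parameters (illustrative)
y = X @ w_true                         # noise-free targets for simplicity

w = np.zeros(3)   # initial parameter estimate
lr = 0.05         # learning rate (step size), chosen by hand
for step in range(500):                # many iterations even for this toy case
    residual = X @ w - y
    grad = (2.0 / len(y)) * (X.T @ residual)  # gradient of the MSE loss
    w -= lr * grad                             # one gradient-descent update
```

Even this three-parameter example needs hundreds of iterations to converge; a deep model repeats such updates over millions of parameters, which is the computational burden the bullet point refers to.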