Latent variables (LVs) underlie many scientific, social, and medical phenomena. While models containing only observed variables (OVs) have been well studied, learning a latent variable model (LVM) that admits both types of variables is difficult. The assumption of no LVs is therefore usually made, but ignoring LVs yields a partial, wrong, and misleading model that misses the true underlying mechanism. In recent years, progress has been made in learning LVMs from data, but most algorithms rely on strong assumptions that limit their scope. Moreover, LVs by nature often change over time, adding to the challenge and complexity of learning, yet current LVM learning algorithms do not account for this. We propose learning a causal model locally in each time slice, and then performing local-to-global learning over time slices, based on probabilistic scoring and temporal reasoning, to merge the local graphs into a latent dynamic Bayesian network whose intra- and inter-slice edges capture causal interrelationships among LVs and between LVs and OVs. Evaluated on synthetically generated data and on data of ALS and Alzheimer's patients, our algorithm demonstrates high accuracy in structure learning, classification, and imputation, with lower complexity.
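To make the output structure concrete, the following is a minimal sketch (with hypothetical variable names, not taken from the paper) of the kind of latent dynamic Bayesian network described above: latent variables H1, H2 and observed variables X1, X2 connected by intra-slice edges within a time slice and inter-slice edges linking consecutive slices, with the two-slice template unrolled over time.

```python
# Hypothetical latent (LV) and observed (OV) variable sets.
latent = {"H1", "H2"}
observed = {"X1", "X2"}

# Edges are (parent, child) pairs; (v, t) denotes variable v in time slice t.
intra_edges = [(("H1", 0), ("X1", 0)),   # LV -> OV within a slice
               (("H1", 0), ("H2", 0)),   # LV -> LV within a slice
               (("H2", 0), ("X2", 0))]
inter_edges = [(("H1", 0), ("H1", 1)),   # LV persists across slices
               (("H2", 0), ("H2", 1))]

def unroll(intra, inter, T):
    """Unroll the two-slice DBN template over T time slices."""
    edges = []
    for t in range(T):
        # Copy intra-slice edges into slice t.
        edges += [((p, t), (c, t)) for ((p, _), (c, _)) in intra]
        # Copy inter-slice edges from slice t into slice t+1.
        if t + 1 < T:
            edges += [((p, t), (c, t + 1)) for ((p, _), (c, _)) in inter]
    return edges

edges = unroll(intra_edges, inter_edges, T=3)
```

This is only a structural illustration: the paper's algorithm learns such intra- and inter-slice edges from data via local causal discovery, probabilistic scoring, and temporal reasoning, rather than specifying them by hand.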