A novel machine learning framework, experience learning (EL), is proposed for observing new objects and mastering new skills; it is particularly suited to artificial intelligence robots (AIR) exploring the unknown. Unlike traditional approaches, it does not require a large training sample set prior to model training. Instead, inspired by early human learning behavior, an experience chain is established by continuously observing or stimulating the objects under study and recording these experiences. Through repeated observation and attempts, the experience chain is updated and gradually converges toward the actual output probability of the object under study. The current experience unit serves as the basis for EL judgments, while past experiences can be discarded via a forget coefficient. The application of this framework is illustrated with two simple examples. The cat-and-dog generator experiment represents self-directed exploration of new objects. The virtual basketball machine experiment demonstrates the method's ability to learn a new skill and to effectively mitigate random interference. A comparison analyzes the similarities and differences between the proposed method and related algorithms. Ultimately, this approach proves valuable in enabling artificial intelligence systems to study and explore unknown territory.
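The core idea of an experience chain with a forget coefficient can be sketched as an exponentially weighted running estimate. This is only an illustrative sketch, not the paper's exact algorithm: the class name, the binary outcomes, and the specific update rule are assumptions introduced for clarity.

```python
import random

class ExperienceChain:
    """Illustrative sketch (assumed, not the paper's exact method):
    maintain a running estimate of an object's output probability,
    where a forget coefficient geometrically discounts older
    experience units in favor of the current one."""

    def __init__(self, forget=0.02):
        self.forget = forget   # weight given to the newest experience unit
        self.estimate = 0.5    # prior belief before any observation

    def observe(self, outcome):
        # Blend the new observation in; past experience decays geometrically,
        # which effectively "discards" sufficiently old experience units.
        self.estimate = (1 - self.forget) * self.estimate + self.forget * outcome
        return self.estimate

# Simulated object whose true output probability is 0.7.
random.seed(0)
chain = ExperienceChain(forget=0.02)
for _ in range(5000):
    chain.observe(1 if random.random() < 0.7 else 0)
print(chain.estimate)  # should hover near 0.7 after many observations
```

With repeated observations the estimate converges toward the object's actual output probability, mirroring the convergence behavior the framework claims; a larger forget coefficient adapts faster but fluctuates more.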