Experience Replay (ER) improves data efficiency in Deep Reinforcement Learning by allowing the agent to revisit past experiences that can contribute to learning the current policy. A recent method called COMPact Experience Replay (COMPER) seeks to improve ER by reducing the number of experiences required to train the agent while preserving the total accumulated reward in the long run. COMPER can approximate good policies on Atari 2600 games in the Arcade Learning Environment (ALE) from considerably fewer frame observations, achieving results similar to those of DQN-based methods in the literature, which generally demand millions of video frames. In this work, we conduct a detailed analysis of its components to verify and understand how they operate and how they could improve the data efficiency of the Deep Deterministic Policy Gradient (DDPG) algorithm applied to physical control problems in MuJoCo environments. We present an extension of DDPG, named COMPER-DDPG, which incorporates components from COMPER and, with only 50,000 training iterations, achieves better results than the original algorithm in almost all environments used in our experiments.
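
For context, standard experience replay stores transitions (state, action, reward, next state, done) in a fixed-capacity buffer and samples uniform mini-batches from it at each training step. The sketch below is a minimal illustration of this baseline mechanism only, not of COMPER's compact memories; the class and method names (`ReplayBuffer`, `push`, `sample`) are illustrative assumptions, not identifiers from the paper.

```python
import random
from collections import deque


class ReplayBuffer:
    """Minimal uniform experience replay: store transitions, sample mini-batches."""

    def __init__(self, capacity):
        # Oldest transitions are discarded automatically once capacity is reached.
        self.buffer = deque(maxlen=capacity)

    def push(self, state, action, reward, next_state, done):
        # Store one transition tuple (s, a, r, s', done).
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Uniformly sample a mini-batch of past transitions for one update step.
        batch = random.sample(self.buffer, batch_size)
        states, actions, rewards, next_states, dones = zip(*batch)
        return states, actions, rewards, next_states, dones

    def __len__(self):
        return len(self.buffer)
```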