Catastrophic forgetting is the sudden loss of performance that occurs when a neural network is trained on a new task or on unbalanced data. It often limits the ability of neural networks to learn new tasks. Previous work focused on the training data by changing the training regime, balancing the data, or replaying previous training episodes. Other methods used selective training, either allocating portions of the network to individual tasks or otherwise preserving prior task expertise. However, those approaches assume that network attractors are finely tuned, so that even small changes to the weights cause misclassification. This fine-tuning is also believed to occur during overfitting and can be addressed with regularization. This paper introduces a method that quantifies how individual weights contribute to different tasks, independently of weight magnitudes or previous training gradients. Applying this method reveals that backpropagation recruits all weights to contribute to a new task and that single weights may be more robust to noise than previously assumed.
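To make the idea concrete, below is a minimal sketch of one way to quantify per-weight task contributions: zero out a single weight and measure the resulting accuracy drop on each task. This perturbation score depends neither on the weight's magnitude nor on training gradients. The toy linear classifier, synthetic tasks, and the `per_weight_contribution` helper are all hypothetical illustrations, not the paper's actual method.

```python
import numpy as np

def make_task(n=200, d=8, seed=0):
    # Synthetic binary task: labels from a random linear rule (illustrative only).
    r = np.random.default_rng(seed)
    X = r.normal(size=(n, d))
    w_true = r.normal(size=d)
    y = (X @ w_true > 0).astype(int)
    return X, y

def accuracy(w, X, y):
    # Accuracy of a linear threshold classifier with weights w.
    return float(np.mean((X @ w > 0).astype(int) == y))

def per_weight_contribution(w, tasks):
    """Accuracy drop on each task when one weight is zeroed.

    The score measures only the functional effect of removing the
    weight, independent of its magnitude or any training gradient.
    """
    contrib = np.zeros((len(w), len(tasks)))
    for i in range(len(w)):
        w_pert = w.copy()
        w_pert[i] = 0.0
        for t, (X, y) in enumerate(tasks):
            contrib[i, t] = accuracy(w, X, y) - accuracy(w_pert, X, y)
    return contrib

# Two synthetic tasks and one (untrained, random) weight vector.
task_a = make_task(seed=1)
task_b = make_task(seed=2)
w = np.random.default_rng(0).normal(size=8)
C = per_weight_contribution(w, [task_a, task_b])
print(C.shape)  # one contribution score per weight per task
```

A weight with a near-zero score on an old task but a large score on the new one could, under this toy measure, be changed without causing the misclassification that the fine-tuned-attractor assumption predicts.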