One really powerful concept in physics that has shown up time and time again in my classes is the idea of creating and evaluating models. For example, simple Newtonian physics was a model to explain the movement of objects. It turns out it was later shown to be the wrong model; the Galilean transformation is not the "true" model of physics, relativity is.

However, that doesn't mean that Newtonian physics is cast out the window. In fact, it is still taught before relativity.

Simpler models come first in the process of human learning

The reason is that for the vast majority of scenarios a person faces (unless they're a theoretical physicist), the Newtonian model is extremely accurate. So there's this idea in physics that you shouldn't use a needlessly complicated model where a simpler model will also be highly accurate.

Granted, when put like this, it seems a bit trivial...but apply it to computer science.

When solving a computer science problem, it's important to start with a "simple" solution. Then, as you run into irreconcilable bugs, you slowly complicate the model to reach the desired level of accuracy (for programmers, that's usually absolute correctness). This can be pretty different from how people (at least me) typically approach the problem: instead of starting with the simplest solution, we play out the whole scenario in our heads, starting outright from the most advanced solution.
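As a hypothetical illustration (the problem and function names here are my own, not from any particular codebase), consider finding duplicates in a list. The "Newtonian" first pass is an obvious nested loop; only once it proves too slow for your data do you iterate toward a more refined model:

```python
def find_duplicates_simple(items):
    """First pass: the obvious nested-loop model. Correct, but O(n^2)."""
    dupes = []
    for i, a in enumerate(items):
        for b in items[i + 1:]:
            if a == b and a not in dupes:
                dupes.append(a)
    return dupes


def find_duplicates_refined(items):
    """Next iteration: same behavior, refined with a set. O(n)."""
    seen, dupes = set(), set()
    for item in items:
        if item in seen:
            dupes.add(item)
        seen.add(item)
    return sorted(dupes)


print(find_duplicates_simple([1, 2, 3, 2, 1]))   # [1, 2]
print(find_duplicates_refined([1, 2, 3, 2, 1]))  # [1, 2]
```

The simple version is the one you can write and verify in a minute; the refined version only earns its complexity once the simple model stops being accurate enough for your needs.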

We do this because we want to minimize the code we write and then throw away; advancing your model requires a complete "refactor" or "restructuring" of data, something modern text editors perhaps make hard to do:

Most code editors are designed to help debug but not build

At least for now, the best approach is to start with the simple model and advance from there. You'll probably end up saving time and thinking in the end, because this idea of iteration is how we naturally think:

People improve solutions iteratively