Is your concern that our computational systems can’t determine the exact solution because of floating-point precision? Maths generally finds exact, analytical equations to describe problems. Sometimes these are approximated, but they remain analytical and can sometimes be solved exactly (e.g. by rearranging an equation). For some equations, though, you have to resort to numerical solutions, which usually means rewriting the equation, typically in a discretised differential form, and that adds another layer of approximation. And yes, the initial conditions, the boundary conditions, or the computer’s precision can then affect the answer you get if the equation is sensitive to them (i.e. mathematically chaotic). That’s why ensembles are run for systems we know are sensitive in this way (weather, climate, and flood forecasting models).
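A minimal sketch of the point about sensitivity and ensembles, using the logistic map (a standard toy chaotic system, not taken from any particular forecasting model; the parameter values are illustrative assumptions):

```python
# The logistic map x -> r*x*(1 - x) at r = 4 is chaotic, so two starts
# differing by 1e-10 (around the scale of float rounding) soon disagree
# completely -- the equation itself is exact, but a single numerical run
# of it is not trustworthy on its own.

def logistic(x, r=4.0, steps=50):
    for _ in range(steps):
        x = r * x * (1.0 - x)
    return x

a = logistic(0.2)
b = logistic(0.2 + 1e-10)  # tiny perturbation of the initial condition
print(f"run A: {a:.6f}  run B: {b:.6f}  difference: {abs(a - b):.6f}")

# The ensemble approach: run many perturbed initial conditions and look
# at the spread, rather than trusting any single trajectory.
ensemble = [logistic(0.2 + i * 1e-10) for i in range(20)]
print(f"ensemble range: [{min(ensemble):.4f}, {max(ensemble):.4f}]")
```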
But the governing analytical equations remain exact within the terms and cases they were derived for. And I would argue that most equations, even with their approximations, work well enough to understand the reality in which we live, within what they were designed and developed for.
We also have other ways of viewing problems than just space-time. In fluid dynamics, the detail of flow through a channel can be extremely complex, but in most situations you can decompose it into a mean flow plus variability about that mean, and get a good estimate and predictor of the flow without having to solve the full equations. Most engineering simplifies the equations to a set of parameters that can be fitted to a situation. Many applications use phase space rather than time (think of a predator-prey relationship: plot the fox and rabbit populations against each other rather than against time, as in the sketch below). Or, since quantum physics was mentioned, stochastics. The point is that a model/equation is an exact explanation of something, but only within the realm it was derived for (scale, accuracy, and knowns).
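A minimal sketch of the phase-space view, using the classic Lotka-Volterra predator-prey equations (the rate parameters and starting populations here are illustrative assumptions, not fitted to real data):

```python
# Lotka-Volterra: dR/dt = a*R - b*R*F,  dF/dt = d*R*F - g*F
# Instead of plotting rabbits and foxes against time, plot them against
# each other: the populations trace a closed orbit in phase space that
# the two time series, viewed separately, tend to hide.
import numpy as np
import matplotlib.pyplot as plt

a, b, g, d = 1.0, 0.1, 1.5, 0.075   # illustrative rate constants
dt, steps = 0.001, 40_000

rabbits, foxes = 10.0, 5.0
trajectory = np.empty((steps, 2))
for i in range(steps):
    # Forward-Euler step: one more layer of approximation, as noted above
    # (it drifts slowly off the true orbit; smaller dt reduces the drift).
    dr = (a * rabbits - b * rabbits * foxes) * dt
    df = (d * rabbits * foxes - g * foxes) * dt
    rabbits, foxes = rabbits + dr, foxes + df
    trajectory[i] = rabbits, foxes

plt.plot(trajectory[:, 0], trajectory[:, 1])
plt.xlabel("rabbits")
plt.ylabel("foxes")
plt.show()
```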