Whilst I think it would be much harder for this to happen today, computer-controlled systems are not infallible.
It’s a similar design process to the oil and gas industry, called SIL/LOPA.
You start with a process and determine the probability of an initiating event. Simplistically, say the operator sets the reactor to 150% when he meant to type 15% — a human error that happens once a year.
You then have a PC that's supposed to warn him; one time in 10 he ignores it.
You then have a relief valve that's meant to prevent overpressure; one time in 10 it fails to lift.
So your probability of it blowing up is 1 in 100 years, which is probably acceptable for something like a water pump at a utility company where the only cost of failure is a loss of face.
That’s your LOPA (layers of protection analysis).
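The arithmetic behind a LOPA is just multiplication: the initiating-event frequency times each protection layer's probability of failure on demand (PFD). A minimal sketch of the example above (the variable names and structure are mine, not from any standard tool):

```python
# Hypothetical LOPA arithmetic for the example above.
initiating_event_per_year = 1.0  # operator typo: once a year
layer_pfds = [
    0.1,  # warning on the PC: ignored 1 time in 10
    0.1,  # relief valve: fails to lift 1 time in 10
]

mitigated_frequency = initiating_event_per_year
for pfd in layer_pfds:
    mitigated_frequency *= pfd

# Roughly 0.01 events per year, i.e. about 1 in 100 years.
print(f"{mitigated_frequency:.3f} events/year")
```

Each independent layer knocks the event frequency down by its PFD, which is why adding layers (or improving one) is how you buy your way down to an acceptable risk target.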
Then you add SIL systems (safety integrity levels).
You can't trust a single temperature probe not to fail, or a single control valve, so you add 3 probes (voting 2oo3) to the reactor and 2x shutdown valves on the inlet. Now you've got, say, a 1 in 100 year failure rate for the SIL system: 1×10⁻² = SIL 2. You can go SIL 3 or 4, either with more sensors and valves or with independent systems.
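The point of 2oo3 voting is that the voted system only gives a wrong answer if at least 2 of the 3 probes fail together, which is far less likely than a single probe failing. A quick sketch of that arithmetic (the 1% per-probe failure probability is an assumption I picked for illustration):

```python
# Hypothetical 2oo3 voting arithmetic; p is an assumed per-probe
# probability of failure on demand, not a figure from the post.
p = 0.01

single_probe = p

# The voted trio fails only if >= 2 of 3 probes fail:
# P = C(3,2) * p^2 * (1 - p) + p^3
voted_2oo3 = 3 * p**2 * (1 - p) + p**3

print(f"single probe: {single_probe:.6f}")
print(f"2oo3 voted:   {voted_2oo3:.6f}")
```

With these numbers the voted arrangement is roughly 30x more reliable than one probe, and it also tolerates one probe failing *spuriously* without tripping the plant, which a simple 1oo1 or 1oo2 scheme can't do.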
Usually nuclear plants are a level higher than oil refineries/petrochemical plants, which is why almost every year sees one of the latter explode, but we only get a nuclear problem every 10 years or so (Three Mile Island, Windscale, Chernobyl, Fukushima).