A rather technical article oriented towards big-system security types; the link covers the basics of expert systems as a partial replacement for human decision-making.
While the author focuses on reducing human operators’ workloads, I find this interesting for a different angle — reducing human vulnerabilities. An expert system isn’t subject to the same cognitive biases that allow an adversary to amplify irrationality.
Applied to big systems, of course, this all makes a wonderful recipe for totalitarian dystopia. Ensuring authority has flaws is a defense for the rest of us.
On the other hand, if you’re trying to protect yourself against a clever attacker, there’s possibility here. An expert system need not be a complex program; it could just as easily be a set of written rules and procedures you follow.
The most rudimentary case is a daily water-level check: if you live on a river, start packing if the water level rises above a certain mark. Left to gut instinct, you might keep saying “it’s fine, it’s been rising very slowly” until it’s too late.
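To make the idea concrete, here’s a minimal sketch of that water-level rule as code. Everything in it is hypothetical illustration: the threshold value and the function name are mine, not from any real flood procedure. The point is only that the rule is fixed in advance and applied mechanically, with no room for the “it’s been rising slowly” rationalization.

```python
# A written rule encoded as data: decide the threshold once, in advance,
# then apply it mechanically instead of consulting your gut each morning.
EVACUATION_MARK_M = 4.5  # hypothetical threshold, in meters

def should_pack(water_level_m: float) -> bool:
    """Return True once the daily reading reaches the pre-set mark."""
    return water_level_m >= EVACUATION_MARK_M

print(should_pack(3.2))  # below the mark: carry on
print(should_pack(4.6))  # at or above the mark: start packing
```

The rule could just as well live on paper; the code form only emphasizes that there is no judgment call at evaluation time.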
(I used to keep a very long to-do list for similar reasons, cranking through it to keep myself getting things done even if I didn’t feel like doing them. Sadly the method never worked quite as well as I’d have liked.)