AI models make stuff up. How can hallucinations be controlled?

It is hard to do so without also limiting models’ power
