Sort of. IIRC (and I may be wrong), Auto1111 has the base model in the text-to-image tab, but if you want to use the refiner, that's a separate img2img step/tab. Which would be a pain in the ass imo.
The "Comfy" tool is node-based, so you can string both together, which is nice. Although if you aren't confident in your images yet, you don't need the refiner for a bit.
I think the diffusers UIs (like Invoke and VoltaML) are going to implement the refiner soon since HF already has a pipeline for it.
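For reference, here's roughly what chaining the base and refiner looks like with the diffusers pipelines (a hedged sketch based on Hugging Face's documented SDXL flow; assumes a CUDA GPU with enough VRAM for fp16, and the `denoising_end`/`denoising_start` handoff the pipeline exposes):

```python
def generate(prompt: str, handoff: float = 0.8):
    """Run the SDXL base model, then hand its latents to the refiner.

    `handoff` is the fraction of denoising steps done by the base model
    before the refiner takes over (0.8 is the value used in HF's examples).
    Imports live inside the function so defining it doesn't pull in torch.
    """
    import torch
    from diffusers import (
        StableDiffusionXLPipeline,
        StableDiffusionXLImg2ImgPipeline,
    )

    base = StableDiffusionXLPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-base-1.0",
        torch_dtype=torch.float16,
        variant="fp16",
        use_safetensors=True,
    ).to("cuda")
    refiner = StableDiffusionXLImg2ImgPipeline.from_pretrained(
        "stabilityai/stable-diffusion-xl-refiner-1.0",
        torch_dtype=torch.float16,
        variant="fp16",
        use_safetensors=True,
    ).to("cuda")

    # output_type="latent" keeps the base model's latents in memory
    # instead of decoding them, so the refiner can pick up where it left off.
    latents = base(
        prompt=prompt,
        denoising_end=handoff,
        output_type="latent",
    ).images
    image = refiner(
        prompt=prompt,
        denoising_start=handoff,
        image=latents,
    ).images[0]
    return image
```

This is the "string both together" idea from the Comfy workflow expressed in one script, no separate img2img tab needed.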
Comfy and A1111 are based around the original StabilityAI SD code, but the implementations must be pretty similar if they could add the base model so quickly.