You make a LOT of great points.
In the late 80s I was finishing my EE degree. One of my senior projects was writing an "AI" (LOL) program in Prolog for designing electronic filters.
It was fun and it worked well. The computer asked you questions and then designed the right filter for you. It was, of course, very limited, since it was built over a single semester with 80s technology.
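Just to give a flavor of that question-and-answer style, here's a toy sketch in Prolog. This isn't my original program (that's long gone); the questions and rules here are invented purely for illustration.

```prolog
% Hypothetical sketch of a rule-based filter chooser, not the original project.
% Answers must be entered as Prolog terms: yes. or no.

ask(Question, Answer) :-
    format("~w? (yes/no): ", [Question]),
    read(Answer).

% Simplified, made-up design rules.
filter_type(lowpass)  :- ask('Do you need to remove high-frequency noise', yes).
filter_type(highpass) :- ask('Do you need to block DC or low-frequency drift', yes).
filter_type(bandpass) :- ask('Do you need to isolate a single frequency band', yes).

design :-
    (   filter_type(Type)
    ->  format("Suggested filter: ~w~n", [Type])
    ;   format("No matching filter rule.~n")
    ).
```

The real thing went further (component values, topology, and so on), but the basic idea was the same: canned rules walking you through questions.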
So far, everything I've seen, from ChatGPT to Midjourney, is basically the same idea, just with 100³ more horsepower thanks to Moore's law. AFAIK there is no "AI" that would respond to those inputs with something like, "Jack, you don't need a filter. I've created a new electronic device that eliminates the need for a filter."
I know this isn't the rabbit hole we started down, but IMO the current "AI" path we're on will lead to a huge loss for mankind. Our tech lords are "saving" on implementation, but they will only ever get derivatives of what has already been made. Without humans creating (and failing most of the time), we won't make new discoveries.
Recombining the old in new ways is efficient, but it doesn't move science forward.
If we survive the next 30 years without civilization collapsing for other reasons, I think we risk losing an entire generation of ideas to the spinning wheels of LLM fake-AI.