I think it's safe to say that the AlphaZero software isn't just about "more power to process more" -- far from it. Indeed, the point made in the recent paper was that, if anything, less raw computing power was thrown at it in the end (once the neural network had been trained for long enough, at least): at play time it searches orders of magnitude fewer positions per second than conventional chess engines such as Stockfish, which do rely in large part on brute computing power.
I don't think AlphaZero or its equivalents are yet at the stage where they can be trained and used in any meaningful time on a single machine, but then it's still an active research project, and in the long run it's possible, or even probable, that similar, flexible software will be able to tackle many such problems without simply relying on stupidly fast processors.
Finally, on the subject of Go, the lesser-known program "LeelaZero" was started towards the end of last year, and it doesn't rely on TPUs to train -- instead it pools many contributors' home computers, distributed over the internet, to achieve the same effect. Training took a little longer (and hit a few bugs along the way), but once again the effect is remarkable: after about a month the program was already well beyond most strong amateurs, even with only a couple of thousand simulations per move.
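Just for context, since "simulations per move" may sound opaque: here is a rough, much-simplified sketch of the fixed-budget, PUCT-style playout loop these engines use to pick a move. It's a toy in Python with a random stand-in for the neural network; the names and the one-ply structure are my own illustration, not Leela Zero's actual code.

```python
import math
import random

def evaluate(position, moves):
    # Toy stand-in for the neural network: in Leela Zero the move priors and the
    # value estimate come from the trained network; here they are just random.
    priors = {m: 1.0 / len(moves) for m in moves}
    value = random.uniform(-1.0, 1.0)
    return priors, value

def select_move(position, legal_moves, simulations=2000, c_puct=1.5):
    """Pick a move after a fixed number of simulations (one-ply PUCT sketch)."""
    visits = {m: 0 for m in legal_moves}
    total_value = {m: 0.0 for m in legal_moves}
    priors, _ = evaluate(position, legal_moves)

    for _ in range(simulations):
        total_visits = sum(visits.values()) + 1

        # PUCT selection: favour moves with a high average value (exploitation)
        # and moves with a high prior but few visits so far (exploration).
        def score(m):
            q = total_value[m] / visits[m] if visits[m] else 0.0
            u = c_puct * priors[m] * math.sqrt(total_visits) / (1 + visits[m])
            return q + u

        move = max(legal_moves, key=score)

        # A real engine would descend a search tree and evaluate the leaf with
        # the network; here we just sample a value for the chosen move.
        _, leaf_value = evaluate(position, legal_moves)
        visits[move] += 1
        total_value[move] += leaf_value

    # The engine plays the most-visited move.
    return max(legal_moves, key=lambda m: visits[m])

if __name__ == "__main__":
    print(select_move("start", ["a", "b", "c"]))
```

The real thing builds a full search tree and gets its priors and values from the trained network; the point is only that the move is chosen after a fixed number of guided simulations, rather than by brute-force evaluation of tens of millions of positions.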
All of this is by the by, but the point is that AI and machine learning are indeed progressing beyond the "old days" -- and I am surprised, to say the least, that you are either unaware of this or rather arrogantly dismissive of said progress. One might almost wonder if you think that the only difference between computing now and computing when you started was that we've stopped using punch cards.