I’ve often wondered how and why factoring was chosen as a standard hardness assumption for public key cryptography. There are the obvious reasons (ease of implementation, etc.), but given that there is no hardness proof, it seems risky to standardize on it.
One possible explanation is that it is not really hard at all: the expertise and machinery required to factor numbers of the sizes used in standard encryption schemes may be readily available to some, while this capability is thought to be out of reach for nearly everyone else. This could come in the form of different algorithms with better scaling, but somehow I doubt it; that would be too hard to keep secret. I think it’s more likely that special hardware implementations vastly reduce the prefactors in algorithms similar to those publicly known. That would make factoring moderately sized products of primes feasible, if you had the resources to develop such special purpose hardware.
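For scale: the best publicly known classical algorithm, the general number field sieve, factors an integer $n$ in heuristic time

$$
L_n\!\left[\tfrac{1}{3},\left(\tfrac{64}{9}\right)^{1/3}\right]
= \exp\!\left(\left(\left(\tfrac{64}{9}\right)^{1/3} + o(1)\right)(\ln n)^{1/3}(\ln\ln n)^{2/3}\right).
$$

A secret algorithm with “better scaling” would have to beat the $(\ln n)^{1/3}$ in the exponent; special purpose hardware instead attacks the constant prefactor sitting in front of the whole expression.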
In other words, when you really want to do something properly, the scaling of the algorithm is of course important, but speeding everything up by a constant factor of a million can make the difference between “forget it” and “easy”: a computation that would take a million years becomes a one-year project.
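To put a rough number on that, here is a minimal back-of-the-envelope sketch. It assumes the GNFS cost formula above with the $o(1)$ term dropped; the $10^6$ speedup and the `gnfs_ops` helper are purely illustrative, not a claim about any real hardware:

```python
import math

def gnfs_ops(bits, c=(64 / 9) ** (1 / 3)):
    """Heuristic GNFS work L_n[1/3, c] for a `bits`-bit modulus, o(1) dropped."""
    ln_n = bits * math.log(2)
    return math.exp(c * ln_n ** (1 / 3) * math.log(ln_n) ** (2 / 3))

baseline = gnfs_ops(1024)   # estimated ops to factor a 1024-bit modulus
speedup = 1e6               # hypothetical prefactor gain from custom hardware

# Find the key size whose plain-hardware cost matches the sped-up attack.
for bits in range(256, 1025, 8):
    if gnfs_ops(bits) >= baseline / speedup:
        print(f"1024-bit with a 1e6 speedup costs about as much as "
              f"{bits}-bit without one")
        break
```

Under this crude estimate, a millionfold speedup makes a 1024-bit modulus about as expensive to factor as a roughly 570-bit one on ordinary hardware, with no algorithmic improvement at all.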