That is a gap that tokenization is meant to fill. The technology works behind the scenes of a digital transaction: Customers still put in their card number, but software then transforms that information into a one-time token — a randomly generated code — that is sent through the payment-processing chain. Thieves who intercept the code can do little with it without the means to unscramble the token.
This passage, from a New York Times blog article, is another example of the tokenization fallacy.
This description is untrue; tokenization does not work this way. To get authorization for a credit card charge, the point-of-sale system still needs to send the full card data (the contents of magnetic track 1 or 2) to the payment processing server. That data cannot simply be "transformed into a one-time randomly generated token," because the server must be able to recognize and process it. Instead, the card data must be protected in transit by a different technology, point-to-point encryption (P2PE). Only after the card data is decrypted and processed at the payment processor's data center can it be tokenized in the manner described above, with the resulting token returned to the point-of-sale system.

Some P2PE systems use format-preserving encryption, so the encrypted output looks similar to the original input; perhaps that is what created the confusion. But in any case, the data produced by such a system is not "randomly generated," it is not a token, the encryption is done in hardware rather than software, and the technology is called P2PE, not tokenization. Unfortunately, this misunderstanding and overestimation of tokenization is a very common perception.
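To make the distinction concrete, here is a minimal sketch of what tokenization actually looks like on the processor side. The `TokenVault` class and its methods are hypothetical illustrations, not any real payment API; they assume the card data has already arrived at the processor (protected in transit by P2PE) and been authorized. The key point the code demonstrates: the token is random, bears no mathematical relationship to the card number, and can only be mapped back by the party holding the vault.

```python
import secrets
import string

class TokenVault:
    """Hypothetical server-side token vault, kept only at the
    payment processor's data center -- never at the point of sale."""

    def __init__(self):
        # Maps token -> PAN (primary account number).
        self._vault = {}

    def tokenize(self, pan: str) -> str:
        # The token is randomly generated digits of the same length as
        # the PAN. There is nothing to "unscramble": without the vault,
        # the token reveals nothing about the card number.
        token = "".join(secrets.choice(string.digits) for _ in range(len(pan)))
        self._vault[token] = pan
        return token

    def detokenize(self, token: str) -> str:
        # Only the processor, holding the vault, can map a token back.
        return self._vault[token]

# The point of sale receives only the token, after authorization.
vault = TokenVault()
token = vault.tokenize("4111111111111111")
print(token)                    # random digits, same length as the PAN
print(vault.detokenize(token))  # recoverable only via the vault
```

Contrast this with format-preserving encryption used by some P2PE systems: there the output is computed deterministically from the card number and a key, and anyone with the key can decrypt it. A token, by contrast, is pure random data paired with a lookup table.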