A new method could make the training of large language models more efficient: by putting otherwise idle computing time to work, it can double training speed while preserving model accuracy. “Our goal was to turn this idle time into speedup,” Qinghao Hu says.