Large artificial-intelligence language models, such as the one that powers the popular ChatGPT chatbot, can be shrunk to less than half their size without losing much accuracy. This could save large amounts of energy and let people run the models at home rather than in huge data centres.
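The shrinking works by storing each of a model's numerical weights with fewer bits. A minimal sketch of the general idea, quantisation, is below: 32-bit floating-point weights are mapped to 8-bit integers, a fourfold memory saving, and mapped back with only a small error. The function names and example values are illustrative, not from any particular system described in the article.

```python
# Illustrative post-training quantisation: floats -> 8-bit ints and back.
# All names and values here are hypothetical examples.

def quantize(weights, bits=8):
    """Symmetric linear quantisation: map floats to signed integers."""
    qmax = 2 ** (bits - 1) - 1              # 127 for 8 bits
    scale = max(abs(w) for w in weights) / qmax
    q = [round(w / scale) for w in weights]  # each value now fits in 8 bits
    return q, scale

def dequantize(q, scale):
    """Recover approximate floating-point weights."""
    return [x * scale for x in q]

weights = [0.31, -1.24, 0.05, 0.88, -0.47]   # toy 32-bit weights
q, scale = quantize(weights)
restored = dequantize(q, scale)
max_err = max(abs(a - b) for a, b in zip(weights, restored))

# Rounding can be off by at most half a quantisation step,
# so the round-trip error stays below scale / 2.
print(max_err < scale)  # prints True
```

Each weight now occupies 8 bits instead of 32, while the reconstruction error is bounded by half the quantisation step, which is why accuracy degrades only slightly.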
Many recent advances in artificial intelligence have come from scaling up the number of parameters: the internal values that a model tunes during training to produce its outputs. OpenAI’s GPT-3, a version of which…