To address the impact of the limited network bandwidth of edge devices on the communication efficiency of federated learning, and to transmit local model updates efficiently for model aggregation, a communication-efficient federated learning method based on redundant data elimination was proposed. The essential causes of redundant update parameters were analyzed according to the non-IID data properties and the distributed model training characteristics of federated learning; novel definitions of sensitivity and loss function tolerance for coresets were given, and a novel federated coreset construction algorithm was proposed. Furthermore, to fit the extracted coreset, a distributed adaptive sparse network model evolution mechanism was designed to dynamically adjust the model structure and training model size before each global training iteration, which reduces the number of communication bits between edge devices and the server while guaranteeing training model accuracy. Experimental results show that, compared with the state-of-the-art method, the proposed method reduces communication bit transmission by 17% with only a 0.5% degradation in model accuracy.
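As an illustrative aid only, the following is a minimal sketch of sensitivity-proportional coreset sampling, the general technique the abstract references. It assumes per-sample losses under the current local model serve as the sensitivity proxy; the function name build_coreset and this choice of proxy are hypothetical and not taken from the paper, which defines its own sensitivity and loss function tolerance measures.

```python
import numpy as np

def build_coreset(losses, coreset_size, rng=None):
    """Sample a weighted coreset via sensitivity-proportional sampling.

    losses: per-sample loss values under the current local model,
            used here as a simple sensitivity proxy (an assumption;
            the paper defines its own sensitivity measure).
    Returns (indices, weights) such that the weighted loss over the
    coreset is an unbiased estimate of the full-data loss.
    """
    rng = rng or np.random.default_rng()
    sensitivities = np.asarray(losses, dtype=float)
    probs = sensitivities / sensitivities.sum()  # sampling distribution
    idx = rng.choice(len(probs), size=coreset_size, replace=True, p=probs)
    # Inverse-probability weights keep the loss estimator unbiased.
    weights = 1.0 / (coreset_size * probs[idx])
    return idx, weights

# Usage with stand-in losses: select a 100-point coreset from 1000 samples.
losses = np.random.rand(1000) + 0.1
idx, w = build_coreset(losses, coreset_size=100)
```

In this style of sampling, points with larger sensitivity are chosen more often but receive proportionally smaller weights, so the small weighted subset can stand in for the full local dataset during training.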