In recent years, large-scale pre-trained models have advanced rapidly in general domains, yet practically viable technical solutions remain scarce for specialized fields, particularly sensitive areas such as healthcare, finance, and even military applications, where decision support demands precise, professional, and trustworthy knowledge. Concentrating data on cloud servers and then efficiently fine-tuning large-scale pre-trained models there is impractical for industries with stringent data-privacy requirements. This paper proposes a distributed training strategy for large-scale pre-trained models that integrates federated learning. First, each edge device performs parameter-efficient fine-tuning (e.g., LoRA) on its local data offline. Then, only the small sets of fine-tuned parameters are uploaded to the server for aggregation. Finally, the aggregated model provides intelligent services and solutions for complex scenarios in specialized fields.
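A minimal sketch of this workflow in plain PyTorch, under simplifying assumptions: each client wraps a frozen linear layer with a LoRA adapter, fine-tunes only the adapter on its private data, and the server performs a weighted average of the uploaded adapter parameters. Names such as `LoRALinear`, `local_finetune`, and `fedavg_lora` are illustrative and not taken from the paper.

```python
# Illustrative sketch: federated averaging of LoRA adapter parameters only.
import copy
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """Frozen base linear layer plus a trainable low-rank update B @ A (LoRA)."""
    def __init__(self, in_dim, out_dim, rank=4, alpha=8.0):
        super().__init__()
        self.base = nn.Linear(in_dim, out_dim)
        self.base.weight.requires_grad_(False)   # pre-trained weights stay frozen
        self.base.bias.requires_grad_(False)
        self.lora_A = nn.Parameter(torch.randn(rank, in_dim) * 0.01)
        self.lora_B = nn.Parameter(torch.zeros(out_dim, rank))
        self.scale = alpha / rank

    def forward(self, x):
        return self.base(x) + self.scale * (x @ self.lora_A.T @ self.lora_B.T)

def lora_state(model):
    # Only the small LoRA tensors are communicated; base weights never leave the device.
    return {k: v.detach().clone() for k, v in model.state_dict().items() if "lora_" in k}

def local_finetune(model, data, target, epochs=5, lr=1e-2):
    # Offline fine-tuning on one edge device's private data.
    opt = torch.optim.Adam([p for p in model.parameters() if p.requires_grad], lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss_fn(model(data), target).backward()
        opt.step()
    return lora_state(model)

def fedavg_lora(client_states, client_sizes):
    # Server-side aggregation: average of uploaded LoRA parameters, weighted by data size.
    total = sum(client_sizes)
    return {k: sum(s[k] * (n / total) for s, n in zip(client_states, client_sizes))
            for k in client_states[0]}

if __name__ == "__main__":
    global_model = LoRALinear(16, 4)
    # Three simulated edge devices with private (synthetic) data.
    clients = [(torch.randn(32, 16), torch.randn(32, 4)) for _ in range(3)]

    for rnd in range(2):                             # two federated rounds
        states, sizes = [], []
        for x, y in clients:
            local = copy.deepcopy(global_model)      # start from current global adapter
            states.append(local_finetune(local, x, y))
            sizes.append(len(x))
        global_model.load_state_dict(fedavg_lora(states, sizes), strict=False)
        print(f"round {rnd}: aggregated {len(states)} LoRA updates")
```

The design choice this illustrates is the communication pattern: because only the low-rank adapter matrices are exchanged, per-round upload cost scales with the adapter size rather than the full model, and raw local data never leaves the edge devices.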