With the advancement of language models (LMs), their exposure to private data is increasingly inevitable, and their deployment (especially of smaller ones) on personal devices, such as PCs and smartphones, has become a prevailing trend. In contexts laden with user information, enabling models to both safeguard user privacy and execute commands efficiently emerges as an essential research imperative. In this paper, we propose CoGenesis, a collaborative generation framework integrating large models (hosted on cloud infrastructure) and small models (deployed on local devices) to address privacy concerns logically. Initially, we design a pipeline to create personalized writing instruction datasets enriched with extensive context details as a testbed for this research issue. Subsequently, we introduce two variants of CoGenesis, based on sketches and logits respectively. Our experimental findings, based on our synthesized dataset and two additional open-source datasets, indicate that: 1) large-scale models perform well when provided with user context but struggle in its absence; 2) specialized smaller models fine-tuned on the synthetic dataset show promise, yet still lag behind their larger counterparts; 3) our CoGenesis framework, utilizing mixed-scale models, showcases competitive performance, providing a feasible solution to privacy concerns.