I am a fourth-year Ph.D. candidate in the Department of Computer Science and Engineering at The Hong Kong University of Science and Technology (HKUST). I am fortunate to be advised by and collaborate with Prof. Wei Wang. I also work closely with Prof. Bo Li.
My current research explores system design and implementation for privacy-preserving machine learning via federated learning and differential privacy. I am also interested in serverless computing and big data analytics.
Prior to HKUST, I received my Bachelor's degree in Computer Science and Technology from Zhejiang University in 2019. I have also spent time interning at the University of California San Diego and at Huawei.
Zhifeng Jiang,
Wei Wang,
Baochun Li,
Bo Li
ACM Symposium on Cloud Computing (SoCC) (2022)
We present an asynchronous FL system for accelerated training. To avoid excessive resource costs and stale training computation, we use a novel scoring mechanism to select participants. We also adapt the pace of model aggregation to dynamically bound the progress gap between the selected clients and the server.
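The core idea of bounding the progress gap can be illustrated with a minimal sketch. This is an assumption-laden toy, not the system's actual algorithm: it rejects updates whose staleness exceeds a bound and down-weights the rest before averaging.

```python
# Hypothetical sketch of staleness-bounded asynchronous aggregation.
# Illustrative only; the actual system's scoring and pacing are richer.

class AsyncAggregator:
    def __init__(self, max_staleness=5):
        self.version = 0              # current server model version
        self.max_staleness = max_staleness
        self.buffer = []              # accepted (weight, update) pairs

    def submit(self, client_version, update):
        """Accept a client update only if its staleness is within bound."""
        staleness = self.version - client_version
        if staleness > self.max_staleness:
            return False              # too stale: discard to avoid wasted work
        # Down-weight stale updates so fresher ones dominate the average.
        weight = 1.0 / (1 + staleness)
        self.buffer.append((weight, update))
        return True

    def aggregate(self):
        """Weighted average of buffered updates; advances the model version."""
        total = sum(w for w, _ in self.buffer)
        merged = sum(w * u for w, u in self.buffer) / total
        self.buffer.clear()
        self.version += 1
        return merged
```

Bounding staleness at submission time is what keeps the client-server progress gap from growing without limit, at the cost of discarding some client work.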
Minchen Yu,
Zhifeng Jiang,
Hok Chun Ng,
Wei Wang,
Ruichuan Chen,
Bo Li
IEEE International Conference on Distributed Computing Systems (ICDCS) (2021)
Best Paper Runner-Up
We present a serverless-based model serving system that automatically partitions a large model across multiple serverless functions. It employs two novel model partitioning algorithms that respectively achieve latency-optimal serving and cost-optimal serving with SLO compliance.
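To give a flavor of the partitioning problem, here is a deliberately simple greedy sketch that packs consecutive layers into functions under a per-function memory cap. This is a hypothetical illustration only; the paper's two algorithms optimize latency and cost, which this sketch does not attempt.

```python
# Hypothetical greedy sketch of splitting model layers across serverless
# functions under a per-function memory cap (illustrative only; the
# paper's latency- and cost-optimal algorithms are more sophisticated).

def partition(layer_sizes, mem_cap):
    """Pack consecutive layer indices into groups without exceeding mem_cap."""
    groups, current, used = [], [], 0
    for i, size in enumerate(layer_sizes):
        if size > mem_cap:
            raise ValueError(f"layer {i} exceeds the per-function cap")
        if used + size > mem_cap:
            groups.append(current)    # close the current function
            current, used = [], 0
        current.append(i)
        used += size
    if current:
        groups.append(current)
    return groups
```

Each group would map to one serverless function; an optimal partitioner would additionally weigh per-stage compute time and inter-function transfer cost.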
Zhifeng Jiang,
Wei Wang,
Bo Li,
Qiang Yang
IEEE Transactions on Big Data (TBD) (2022)
We survey recent works addressing the challenges that emerge from synchronous FL training and organize them along a typical training workflow with three phases: client selection, configuration, and reporting. We also review measurement studies and benchmarking tools.
Zhifeng Jiang,
Wei Wang,
Ruichuan Chen
We present a distributed differentially private FL framework. It uses a novel "add-then-remove" scheme that enforces a required noise level in each FL training round even if some sampled clients drop out before the end. It also adopts a distributed architecture that optimally pipelines communication and computation.
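The dropout-robustness intuition can be shown with a small numeric sketch, under the assumption (mine, not the paper's) that clients add independent Gaussian noise calibrated to a worst-case survivor count, so the "remove" step only ever strips excess variance.

```python
# Hypothetical sketch of the noise accounting behind an "add-then-remove"
# idea (illustrative only; function names and parameters are assumptions).
import math

def per_client_sigma(target_sigma, min_survivors):
    """Noise std each client adds so that, with at least `min_survivors`
    surviving clients, the aggregate noise std reaches `target_sigma`.
    Independent Gaussians: variances add across clients."""
    return target_sigma / math.sqrt(min_survivors)

def aggregate_sigma(sigma_c, survivors):
    """Std of the summed noise contributed by `survivors` clients."""
    return sigma_c * math.sqrt(survivors)

def excess_variance(sigma_c, survivors, target_sigma):
    """Variance beyond the target that a 'remove' step would strip
    once the surviving set is known (non-negative when
    survivors >= min_survivors)."""
    return survivors * sigma_c**2 - target_sigma**2
```

Because clients over-provision noise for the worst case, any dropout pattern with at least the assumed number of survivors still meets the target, and surplus noise can be removed rather than added after the fact.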
Peng Ye,
Zhifeng Jiang,
Wei Wang,
Bo Li,
Baochun Li
We focus on client data with binary features and show that, unless the feature space is exceedingly large, a robust search-based attack can precisely reconstruct the binary features in practice. We also present a defense mechanism that overcomes this vulnerability by misleading the adversary into searching for fabricated features.
Zhifeng Jiang,
Wei Wang,
Yang Liu
We propose a homomorphic encryption scheme for efficient cross-silo FL. Compared with commonly used HE schemes, ours drops the asymmetric-key design and involves only modular addition operations with random numbers. Computation efficiency improves further when the scheme is combined with sparsification techniques.
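The flavor of masking updates with random numbers under modular addition can be sketched as follows. This is a simplified toy of additive masking, assuming integer-encoded updates and pads that the aggregation side can jointly cancel; the paper's actual scheme and key handling may differ.

```python
# Hypothetical sketch of additive masking over a modular ring
# (illustrative only; not the paper's actual construction).

MOD = 2**32  # assumed ring size for integer-encoded updates

def mask(update, pad):
    """Client-side: hide an integer-encoded update with a random pad."""
    return [(u + p) % MOD for u, p in zip(update, pad)]

def aggregate(masked_updates, pads):
    """Sum masked updates, then subtract the summed pads to recover the
    plaintext sum without exposing any individual client's update."""
    dim = len(masked_updates[0])
    total = [sum(m[i] for m in masked_updates) % MOD for i in range(dim)]
    pad_sum = [sum(p[i] for p in pads) % MOD for i in range(dim)]
    return [(t - s) % MOD for t, s in zip(total, pad_sum)]
```

Because masking and unmasking are plain modular additions, the per-element cost is far below that of public-key HE operations, which is the efficiency argument in a nutshell.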
Zhifeng Jiang,
Mohammadkazem Taram,
Ashish Venkat,
Dean M. Tullsen
To thwart Return-Oriented Programming attacks through microarchitecture-level modifications, we propose a CPU plugin that enables context-sensitive decoding to securely back up and validate return addresses. We evaluate its defense capability and runtime efficiency in the gem5 simulator.
Journal Reviewer: IEEE TMC
Shadow Program Committee: EuroSys'23
Artifact Evaluation Committee: SOSP'21, OSDI'22, ATC'22