Federated Learning · Vision Language Models · FedOps

FedOps-Multimodal Setup

Select your model and dataset, configure FL training, and download a ready-to-run federated VLM fine-tuning setup. Create a task at ccl.gachon.ac.kr/fedops/task, then deploy the same folder on server (K8s) and client (local GPU).

FedOps PyPI + Flower
LoRA / QLoRA
Plugin-based VLMs
1. Choose Model

Vision Language Model to federate

🔭 OneVision 0.5B (llava-onevision-qwen2-0.5b) [RESEARCH]

LLaVA-OneVision with a Qwen2 0.5B backbone. Fast training, low memory. Includes a pre-generated parameter_shapes.json.

Params: 0.5B
GPU RAM: ~3GB (QLoRA)
LoRA params: 492 tensors
param_shapes: ✓ included
📱 PhiVA 4B (nota-ai/phiva-4b-hf) [MOBILE]

MLC-compatible VLM optimized for Samsung Galaxy A24 edge deployment. Larger model, richer representations.

Params: 4B
GPU RAM: ~6GB (QLoRA)
MLC export: ✓ ready
param_shapes: ✓ included
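The LoRA parameter counts above scale with the adapter rank: for a linear layer of shape (d_out, d_in), a rank-r adapter adds r · (d_in + d_out) trainable weights. A minimal sketch of the arithmetic (the layer count and dimensions below are hypothetical, not taken from either model):

```python
def lora_param_count(layer_shapes, rank):
    """Trainable parameters added by rank-r LoRA adapters.

    For each adapted linear layer W of shape (d_out, d_in), LoRA adds
    two low-rank factors: A of shape (rank, d_in) and B of shape
    (d_out, rank), so rank * (d_in + d_out) parameters per layer.
    """
    return sum(rank * (d_in + d_out) for d_out, d_in in layer_shapes)


# Hypothetical example: 24 square attention projections of size 896x896,
# adapted at rank 8.
shapes = [(896, 896)] * 24
print(lora_param_count(shapes, rank=8))  # 24 * 8 * (896 + 896) = 344064
```

Doubling the rank doubles the adapter size (and VRAM for its optimizer state), which is why the configuration step below warns that a higher rank means more parameters and more VRAM.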
2. Choose Dataset

Federated dataset for VQA fine-tuning

🏥 VQA-RAD (flaviagiammarino/vqa-rad)
Medical radiology visual QA: 313 training samples. Validated with PhiVA (39.2% exact match).

🌐 VQAv2 (HuggingFaceM4/VQAv2)
General visual QA benchmark: large scale, diverse image-question pairs across many domains.

🔬 PathVQA (flaviagiammarino/path-vqa)
Pathology visual QA: histology and microscopy images with yes/no and open-ended clinical questions (~19.7K train).

🧠 SLAKE (mdwiratathya/SLAKE-vqa-english)
Multi-organ medical VQA: brain, chest, and abdomen across X-ray, CT, and MRI. English subset (~4.9K train, 1K test).

📊 ChartQA (HuggingFaceM4/ChartQA)
Chart and figure visual QA: reasoning over bar charts, line graphs, and pie charts (~18K train).

📄 DocVQA (HuggingFaceM4/DocVQA)
Document visual QA: scanned business and industrial documents with information-extraction questions (~10K train).
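The "exact match" score quoted for VQA-RAD is typically computed by normalizing the predicted and gold answers and comparing the resulting strings. A minimal sketch of such a metric (the normalization rules here are a common convention and an assumption, not necessarily FedOps' exact implementation):

```python
import re


def normalize(answer):
    """Lowercase, strip punctuation and articles, collapse whitespace."""
    answer = answer.lower().strip()
    answer = re.sub(r"[^\w\s]", "", answer)          # drop punctuation
    answer = re.sub(r"\b(a|an|the)\b", " ", answer)  # drop articles
    return " ".join(answer.split())


def exact_match(predictions, references):
    """Fraction of predictions that match the reference after normalization."""
    hits = sum(
        normalize(p) == normalize(r) for p, r in zip(predictions, references)
    )
    return hits / len(references)


preds = ["Yes", "the left lung", "pneumonia"]
golds = ["yes", "left lung", "atelectasis"]
print(exact_match(preds, golds))  # 2 of 3 match after normalization
```

Exact match is strict: for open-ended questions a semantically correct but differently worded answer still scores zero, which is part of why yes/no-heavy medical sets like VQA-RAD report it.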

3. FL Configuration

Federated learning hyperparameters

Rounds: total aggregation rounds
Local steps: gradient steps per FL round
LoRA rank: higher means more trainable parameters and more VRAM
Quantization: 4-bit recommended for a T4 GPU
Min clients: clients required per round
Aggregation strategy: FedMAP solves a QP each round to minimise directional variance, which suits cross-domain non-IID data
Learning rate: AdamW learning rate

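At aggregation time the server combines the clients' LoRA updates as a weighted average. The sketch below shows plain weighted averaging (FedAvg-style); FedMAP differs only in how the per-client weights are chosen (via the QP mentioned above), so here they are supplied directly for illustration:

```python
def aggregate(client_updates, weights):
    """Weighted average of client parameter updates (FedAvg-style).

    client_updates: list of dicts mapping tensor name -> list of floats.
    weights: per-client aggregation weights summing to 1. FedMAP would
    derive these by solving a QP that minimises directional variance;
    here they are given explicitly for illustration.
    """
    aggregated = {}
    for name in client_updates[0]:
        length = len(client_updates[0][name])
        aggregated[name] = [
            sum(w * upd[name][i] for w, upd in zip(weights, client_updates))
            for i in range(length)
        ]
    return aggregated


updates = [{"lora_A": [1.0, 2.0]}, {"lora_A": [3.0, 4.0]}]
print(aggregate(updates, [0.5, 0.5]))  # {'lora_A': [2.0, 3.0]}
```

Because only the LoRA tensors (e.g. the 492 tensors for OneVision 0.5B) travel between client and server, each round's payload stays far smaller than the full model.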
4. Preview & Download

Generated config.yaml and zip contents

conf/config.yaml
# Select model and dataset above to generate config
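For orientation, a generated conf/config.yaml for the OneVision 0.5B + VQA-RAD combination might look roughly like this. Every field name below is an assumption for illustration; the authoritative schema is the file inside the downloaded zip:

```yaml
# Hypothetical sketch of a generated config; field names are assumptions.
task_id: my-vlm-task
model:
  name: llava-onevision-qwen2-0.5b
  quantization: 4bit        # QLoRA, recommended for T4 GPUs
  lora_rank: 8
dataset:
  name: flaviagiammarino/vqa-rad
fl:
  rounds: 10                # total aggregation rounds
  local_steps: 20           # gradient steps per FL round
  min_clients: 2            # clients required per round
  strategy: fedmap
  learning_rate: 2.0e-4     # AdamW
server:
  port: 8080
```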
📦 Zip contents

server_main.py
client_main.py
client_manager_main.py
models.py
data_preparation.py
generate_paramshape.py
requirements.txt
setup.sh
conf/config.yaml
parameter_shapes.json

How to use

📥 1. Download & Setup

Unzip FedOps-Multimodal-Setup.zip, then run the included setup script:

bash setup.sh

(This handles the fedops↔transformers version conflict automatically.)
🖥️ 2. Start Server (K8s)

First, create a task at ccl.gachon.ac.kr/fedops/task. Then copy the folder to the K8s pod and run:

python server_main.py

The server listens on :8080.
🤖 3. Start Client (GPU)

Set FEDOPS_PARTITION_ID=N for each client, then run:

python client_main.py
python client_manager_main.py
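FEDOPS_PARTITION_ID lets several clients train on disjoint slices of the same dataset. A minimal sketch of how a client might select its partition from the environment variable (the round-robin split is an assumption for illustration, not necessarily what data_preparation.py does):

```python
import os


def client_partition(samples, num_clients, partition_id):
    """Round-robin split: client k takes samples k, k+n, k+2n, ..."""
    return samples[partition_id::num_clients]


# Each client process reads its ID from the environment, e.g.
#   FEDOPS_PARTITION_ID=1 python client_main.py
pid = int(os.environ.get("FEDOPS_PARTITION_ID", "0"))
data = list(range(10))  # stand-in for 10 VQA samples
print(client_partition(data, num_clients=2, partition_id=pid))
```

With two clients, partition 0 sees the even-indexed samples and partition 1 the odd-indexed ones, so no sample is trained on by both clients in a round.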