Select a model and a dataset, configure the federated-learning hyperparameters, and download a ready-to-run federated VLM fine-tuning setup. Create a task at ccl.gachon.ac.kr/fedops/task, then deploy the same folder on the server (K8s) and on each client (local GPU).
Vision Language Model to federate
LLaVA-OneVision with Qwen2 0.5B backbone. Fast training, low memory. Includes pre-generated parameter_shapes.json.
MLC-compatible VLM optimized for Samsung Galaxy A24 edge deployment. Larger model, richer representations.
Federated dataset for VQA fine-tuning
Medical radiology visual QA — 313 training samples. Validated with PhiVA (39.2% exact match).
General visual QA benchmark — large scale, diverse image-question pairs across many domains.
Financial chart and document visual QA — 100k samples for financial analysis tasks.
Dutch legal document visual QA — scanned legal documents with structured question answering.
Federated learning hyperparameters
Rounds: total aggregation rounds
Local steps: gradient steps per FL round
LoRA rank: higher rank means more trainable parameters and more VRAM
Quantization: 4-bit recommended for a T4 GPU
Min clients: clients required per round
Learning rate: AdamW optimizer learning rate
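The LoRA rank trade-off above can be made concrete: for one linear layer, LoRA adds two low-rank factors, A of shape (rank, d_in) and B of shape (d_out, rank), so the number of trainable parameters grows linearly with rank. A minimal sketch; the 896 dimension matches the published Qwen2 0.5B hidden size, but which layers actually receive adapters depends on the generated config:

```python
def lora_param_count(d_in: int, d_out: int, rank: int) -> int:
    """Trainable parameters LoRA adds to one linear layer:
    A has shape (rank, d_in), B has shape (d_out, rank)."""
    return rank * d_in + d_out * rank

# Illustrative: a square 896x896 projection, one layer only.
for r in (4, 8, 16, 32):
    print(f"rank={r}: {lora_param_count(896, 896, r)} trainable params")
```

Doubling the rank doubles the added parameters (and, roughly, the adapter's share of VRAM), which is why a small rank is usually enough on a T4.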
Generated config.yaml and zip contents
# Select model and dataset above to generate config
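A sketch of what the generated config.yaml might contain, pairing the hyperparameters above with a model and dataset choice. All key names below are illustrative assumptions, not the framework's actual schema:

```yaml
# Illustrative only: the real keys are produced by the FedOps task generator.
model:
  name: llava-onevision-qwen2-0.5b
dataset:
  name: medical-radiology-vqa
fl:
  rounds: 10            # total aggregation rounds
  local_steps: 20       # gradient steps per FL round
  min_clients: 2        # clients required per round
train:
  lora_rank: 8          # higher = more params, more VRAM
  quantization: 4bit    # recommended for a T4 GPU
  learning_rate: 2e-4   # AdamW learning rate
```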
Download FedopsTune.zip, then:
pip install -r requirements.txt
pip install fedops-vlm-framework
python server_main.py          (serves on port 8080)
python client_main.py
python client_manager_main.py
FEDOPS_PARTITION_ID=N          (set on each client to select its data partition)