Federated Learning · Vision Language Models · FedOps

FedopsTune Hub

Select your model and dataset, configure FL training, and download a ready-to-run federated VLM fine-tuning setup. Create a task at ccl.gachon.ac.kr/fedops/task, then deploy the same folder on the server (K8s) and clients (local GPUs).

FedOps PyPI + Flower
LoRA / QLoRA
Plugin-based VLMs
1

Choose Model

Vision Language Model to federate

🔭
OneVision 0.5B
llava-onevision-qwen2-0.5b
RESEARCH

LLaVA-OneVision with Qwen2 0.5B backbone. Fast training, low memory. Includes pre-generated parameter_shapes.json.

Params
0.5B
GPU RAM
~3GB QLoRA
LoRA params
492 tensors
param_shapes
✓ included
📱
PhiVA 4B
nota-ai/phiva-4b-hf
MOBILE

MLC-compatible VLM optimized for Samsung Galaxy A24 edge deployment. Larger model, richer representations.

Params
4B
GPU RAM
~6GB QLoRA
MLC export
✓ ready
param_shapes
✓ included
2

Choose Dataset

Federated dataset for VQA fine-tuning

🏥
VQA-RAD
flaviagiammarino/vqa-rad

Medical radiology visual QA — 313 training samples. Validated with PhiVA (39.2% exact match).

🌐
VQAv2
HuggingFaceM4/VQAv2

General visual QA benchmark — large scale, diverse image-question pairs across many domains.

📈
Finance VQA
sujet-ai/sujet-finance-qa-vision

Financial chart and document visual QA — 100k samples for financial analysis tasks.

⚖️
Legal DocVQA
Rickkosse/dutch-legal-docvqa

Dutch legal document visual QA — scanned legal documents with structured question answering.
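The VQA-RAD card above reports accuracy as exact match. A minimal sketch of that metric, assuming answers are compared after lowercasing, punctuation stripping, and whitespace normalization (the evaluation shipped in the zip may normalize differently):

```python
import re

def normalize(ans: str) -> str:
    # Lowercase, drop punctuation, collapse runs of whitespace.
    ans = ans.lower().strip()
    ans = re.sub(r"[^\w\s]", "", ans)
    return re.sub(r"\s+", " ", ans)

def exact_match(predictions, references):
    # Fraction of predictions identical to their reference after normalization.
    hits = sum(normalize(p) == normalize(r)
               for p, r in zip(predictions, references))
    return hits / len(references)

print(exact_match(["Yes.", "left  lung", "no"], ["yes", "Left lung", "maybe"]))
```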

3

FL Configuration

Federated learning hyperparameters

Rounds — total aggregation rounds
Local steps — gradient steps per FL round
LoRA rank — higher = more params, more VRAM
Quantization — 4-bit recommended for T4 GPU
Min clients — clients required per round
Learning rate — AdamW learning rate
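These fields map onto the generated conf/config.yaml. A sketch of what the file might contain; the key names below are illustrative, and the exact schema comes from the downloaded zip:

```yaml
# Illustrative sketch; actual keys are generated by FedopsTune.
model:
  name: llava-onevision-qwen2-0.5b
dataset:
  name: flaviagiammarino/vqa-rad
fl:
  rounds: 10            # total aggregation rounds
  local_steps: 20       # gradient steps per FL round
  min_clients: 2        # clients required per round
finetune:
  method: qlora
  lora_rank: 8          # higher = more params, more VRAM
  quantization: 4bit    # 4-bit recommended for T4 GPU
  learning_rate: 2e-4   # AdamW learning rate
```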

4

Preview & Download

Generated config.yaml and zip contents

conf/config.yaml
# Select model and dataset above to generate config
📦 Zip contents
py server_main.py
py client_main.py
py client_manager_main.py
py models.py
py data_preparation.py
py generate_paramshape.py
txt requirements.txt
yaml conf/config.yaml
json parameter_shapes.json
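generate_paramshape.py records the shape of every trainable LoRA tensor so the server can reassemble flattened client updates. A stdlib-only sketch of the idea, assuming parameter_shapes.json maps parameter names to shape lists (the real script inspects the loaded model; the names and format here are illustrative):

```python
import json

def dump_param_shapes(named_shapes, path="parameter_shapes.json"):
    # Keep only LoRA adapter tensors; these are the weights exchanged in FL.
    shapes = {name: list(shape) for name, shape in named_shapes.items()
              if "lora_" in name}
    with open(path, "w") as f:
        json.dump(shapes, f, indent=2)
    return shapes

# With a real model this mapping would come from:
#   {n: tuple(p.shape) for n, p in model.named_parameters() if p.requires_grad}
example = {
    "base_model.layers.0.q_proj.lora_A.weight": (8, 896),
    "base_model.layers.0.q_proj.lora_B.weight": (896, 8),
    "base_model.layers.0.q_proj.weight": (896, 896),  # frozen, excluded
}
print(dump_param_shapes(example))
```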

How to use

📥
1. Download & Setup
Unzip FedopsTune.zip
pip install -r requirements.txt
(installs fedops, flwr, torch, transformers etc. automatically)
pip install fedops-vlm-framework
(available on PyPI — no source clone needed)
🖥️
2. Start Server (K8s)
First, create a task at ccl.gachon.ac.kr/fedops/task.
Then copy the folder to the K8s pod and run:
python server_main.py
Server listens on :8080
🤖
3. Start Client (GPU)
Set FEDOPS_PARTITION_ID=N (a unique partition ID per client), then run:
python client_main.py
python client_manager_main.py
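FEDOPS_PARTITION_ID tells each client which shard of the federated dataset to load. A minimal sketch of contiguous index partitioning, assuming near-equal shards (the framework's actual splitting strategy may differ):

```python
import os

def partition_indices(num_samples, num_clients, partition_id):
    # Contiguous, near-equal shards; the last client absorbs the remainder.
    base = num_samples // num_clients
    start = partition_id * base
    end = num_samples if partition_id == num_clients - 1 else start + base
    return list(range(start, end))

# Each client reads its ID from the environment before loading data.
pid = int(os.environ.get("FEDOPS_PARTITION_ID", "0"))
print(partition_indices(313, 2, pid))  # VQA-RAD: 313 training samples
```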