First, to evaluate HyperGraphRAG, use `evaluation` as the working directory:

```bash
cd evaluation
```

Then, set the OpenAI API key in the `openai_api_key.txt` file. (We use www.apiyi.com as the LLM server.)

Finally, download the contexts and datasets from Terabox and put them in the `contexts` and `datasets` folders, so that the directory looks like this:
```
HyperGraphRAG/
└── evaluation/
    ├── contexts/
    │   ├── hypertension_contexts.json
    │   ├── agriculture_contexts.json
    │   ├── cs_contexts.json
    │   ├── legal_contexts.json
    │   └── mix_contexts.json
    ├── datasets/
    │   ├── hypertension/
    │   │   └── questions.json
    │   ├── agriculture/
    │   │   └── questions.json
    │   ├── cs/
    │   │   └── questions.json
    │   ├── legal/
    │   │   └── questions.json
    │   └── mix/
    │       └── questions.json
    └── openai_api_key.txt
```
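Before running the pipeline, it can help to confirm the layout above is actually in place. A minimal sketch, assuming the paths shown in the tree (the `check_layout` helper below is ours, not part of the repository):

```python
from pathlib import Path

# Domain names taken from the directory tree above.
DOMAINS = ["hypertension", "agriculture", "cs", "legal", "mix"]

def check_layout(root: str) -> list:
    """Return the expected files (relative to evaluation/) that are missing."""
    base = Path(root) / "evaluation"
    expected = [base / "openai_api_key.txt"]
    expected += [base / "contexts" / f"{d}_contexts.json" for d in DOMAINS]
    expected += [base / "datasets" / d / "questions.json" for d in DOMAINS]
    return [str(p.relative_to(base)) for p in expected if not p.exists()]

if __name__ == "__main__":
    missing = check_layout(".")
    if missing:
        print("Missing files:", *missing, sep="\n  ")
    else:
        print("Layout OK.")
```

Run it from the repository root; an empty result means all eleven expected files are present.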
To build the knowledge base, run insertion for each dataset (each command runs in the background and logs to its own file):

```bash
nohup python script_insert.py --cls hypertension > result_hypertension_insert.log 2>&1 &
# nohup python script_insert.py --cls agriculture > result_agriculture_insert.log 2>&1 &
# nohup python script_insert.py --cls cs > result_cs_insert.log 2>&1 &
# nohup python script_insert.py --cls legal > result_legal_insert.log 2>&1 &
# nohup python script_insert.py --cls mix > result_mix_insert.log 2>&1 &
```

After insertion finishes, run retrieval with the chosen method:

```bash
python script_hypergraphrag.py --data_source hypertension
# python script_standardrag.py --data_source hypertension
# python script_naivegeneration.py --data_source hypertension
```

Next, generate answers:

```bash
python get_generation.py --data_sources hypertension --methods HyperGraphRAG
# python get_generation.py --data_sources hypertension --methods StandardRAG
# python get_generation.py --data_sources hypertension --methods NaiveGeneration
```

Then compute the scores:

```bash
CUDA_VISIBLE_DEVICES=0 python get_score.py --data_source hypertension --method HyperGraphRAG
# CUDA_VISIBLE_DEVICES=0 python get_score.py --data_source hypertension --method StandardRAG
# CUDA_VISIBLE_DEVICES=0 python get_score.py --data_source hypertension --method NaiveGeneration
```

Finally, view the scores:

```bash
python see_score.py --data_source hypertension --method HyperGraphRAG
# python see_score.py --data_source hypertension --method StandardRAG
# python see_score.py --data_source hypertension --method NaiveGeneration
```
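The commands above evaluate one (dataset, method) pair at a time. To sweep every combination, the generation/scoring/report steps can be driven from a small script. This is a sketch under the assumption that the script names and flags shown above are stable; it omits the `CUDA_VISIBLE_DEVICES` setting for `get_score.py`, which you may need to add to the environment:

```python
import subprocess  # used only when dry_run=False

# Datasets and methods as listed in the commands above.
DATA_SOURCES = ["hypertension", "agriculture", "cs", "legal", "mix"]
METHODS = ["HyperGraphRAG", "StandardRAG", "NaiveGeneration"]

def pipeline_commands(data_source: str, method: str) -> list:
    """Build the generation, scoring, and report commands for one pair."""
    return [
        ["python", "get_generation.py", "--data_sources", data_source, "--methods", method],
        ["python", "get_score.py", "--data_source", data_source, "--method", method],
        ["python", "see_score.py", "--data_source", data_source, "--method", method],
    ]

def run_all(dry_run: bool = True) -> list:
    """Collect (and optionally execute) every command; returns the full list."""
    launched = []
    for ds in DATA_SOURCES:
        for m in METHODS:
            for cmd in pipeline_commands(ds, m):
                launched.append(cmd)
                if not dry_run:
                    subprocess.run(cmd, check=True)  # stop on the first failure
    return launched

if __name__ == "__main__":
    for cmd in run_all(dry_run=True):  # print the plan without executing it
        print(" ".join(cmd))
```

With `dry_run=True` the script only prints the planned commands, which is a cheap way to review the sweep before committing GPU time.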