"""Precompute SigLIP feature caches for videos whose frames are already cached.

For each video path, load ``<video_path>.frame_cache`` (a dict holding the
sampled frame tensor), encode the frames with a SigLIP server, and save the
result to ``<video_path>.feature_cache``.
"""
import glob
import multiprocessing
import os

import torch
import tqdm

from time_r1.utils.clip_service import SiglipClient

SIGLIP_URL = os.environ.get("SIGLIP_URL", "grpc://127.0.0.1:51000")
clip_model = SiglipClient(base_url=SIGLIP_URL)


def process_single_video(video_path):
    """Encode one video's cached frames with SigLIP and save the features."""
    try:
        # Frames were precomputed by the frame-cache step and stored next to
        # the video file.
        video = torch.load(video_path + ".frame_cache")["frame_tensor"]
        features = clip_model.encode_images(video)
        print(features.shape, video.shape)
        # Save the features next to the video as well.
        torch.save(features, video_path + ".feature_cache")
    except Exception as e:
        print(f"failed to process {video_path}: {e}")


def prepare_feature_cache(video_root, dataset_path=None, num_workers=8, overwrite=False):
    if dataset_path is not None:
        # The dataset file is a JSON array, so use json.load rather than load_jsonl.
        import json

        with open(dataset_path, "r", encoding="utf-8") as f:
            video_data = json.load(f)

        # Collect video paths, deduplicating with a set (the same video may
        # appear in several records).
        video_paths = set()
        for v in video_data:
            if "video_path" in v:
                video_paths.add(v["video_path"])
            elif "video" in v:
                # A bare "video" field is relative to video_root.
                video_paths.add(os.path.join(video_root, v["video"]))

        video_list = list(video_paths)
    else:
        # No dataset file given: scan video_root recursively for MP4 files.
        video_list = sorted(glob.glob(os.path.join(video_root, "**", "*.mp4"), recursive=True))

    if not video_list:
        print(f"No MP4 videos found in {video_root}")
        return
    if not overwrite:
        print("skipping videos that already have feature cache")
        num_total = len(video_list)
        video_list = [v for v in video_list if not os.path.exists(v + ".feature_cache")]
        num_skipped = num_total - len(video_list)
        print(f"skipped {num_skipped} videos")

    if num_workers is None:
        num_workers = multiprocessing.cpu_count()  # Default to using all available CPU cores
    
    print(f"Found {len(video_list)} videos. Starting processing with {num_workers} workers...")

    # Process videos in parallel; imap_unordered advances the progress bar as
    # soon as any worker finishes, regardless of submission order.
    with multiprocessing.Pool(processes=num_workers) as pool:
        list(tqdm.tqdm(pool.imap_unordered(process_single_video, video_list), total=len(video_list)))

    print("All videos processed.")


if __name__ == "__main__":
    import fire
    fire.Fire(prepare_feature_cache)