calender committed on
Commit a79f504 · verified · 1 Parent(s): fee75e1

Upload 5 files

Files changed (5)
  1. .gitattributes +23 -35
  2. .gitignore +39 -0
  3. README_SPACES.md +119 -0
  4. app.py +326 -0
  5. requirements.txt +32 -0
.gitattributes CHANGED
@@ -1,35 +1,23 @@
- *.7z filter=lfs diff=lfs merge=lfs -text
- *.arrow filter=lfs diff=lfs merge=lfs -text
- *.bin filter=lfs diff=lfs merge=lfs -text
- *.bz2 filter=lfs diff=lfs merge=lfs -text
- *.ckpt filter=lfs diff=lfs merge=lfs -text
- *.ftz filter=lfs diff=lfs merge=lfs -text
- *.gz filter=lfs diff=lfs merge=lfs -text
- *.h5 filter=lfs diff=lfs merge=lfs -text
- *.joblib filter=lfs diff=lfs merge=lfs -text
- *.lfs.* filter=lfs diff=lfs merge=lfs -text
- *.mlmodel filter=lfs diff=lfs merge=lfs -text
- *.model filter=lfs diff=lfs merge=lfs -text
- *.msgpack filter=lfs diff=lfs merge=lfs -text
- *.npy filter=lfs diff=lfs merge=lfs -text
- *.npz filter=lfs diff=lfs merge=lfs -text
- *.onnx filter=lfs diff=lfs merge=lfs -text
- *.ot filter=lfs diff=lfs merge=lfs -text
- *.parquet filter=lfs diff=lfs merge=lfs -text
- *.pb filter=lfs diff=lfs merge=lfs -text
- *.pickle filter=lfs diff=lfs merge=lfs -text
- *.pkl filter=lfs diff=lfs merge=lfs -text
- *.pt filter=lfs diff=lfs merge=lfs -text
- *.pth filter=lfs diff=lfs merge=lfs -text
- *.rar filter=lfs diff=lfs merge=lfs -text
- *.safetensors filter=lfs diff=lfs merge=lfs -text
- saved_model/**/* filter=lfs diff=lfs merge=lfs -text
- *.tar.* filter=lfs diff=lfs merge=lfs -text
- *.tar filter=lfs diff=lfs merge=lfs -text
- *.tflite filter=lfs diff=lfs merge=lfs -text
- *.tgz filter=lfs diff=lfs merge=lfs -text
- *.wasm filter=lfs diff=lfs merge=lfs -text
- *.xz filter=lfs diff=lfs merge=lfs -text
- *.zip filter=lfs diff=lfs merge=lfs -text
- *.zst filter=lfs diff=lfs merge=lfs -text
- *tfevents* filter=lfs diff=lfs merge=lfs -text
+ *.md text eol=lf
+ *.py text eol=lf
+ *.txt text eol=lf
+ *.json text eol=lf
+ *.yml text eol=lf
+ *.yaml text eol=lf
+
+ # Model files
+ *.pth binary
+ *.bin binary
+ *.pkl binary
+ *.h5 binary
+
+ # Images
+ *.png binary
+ *.jpg binary
+ *.jpeg binary
+ *.gif binary
+
+ # Archives
+ *.zip binary
+ *.tar.gz binary
+ *.tgz binary
.gitignore ADDED
@@ -0,0 +1,39 @@
+ # Model files (not needed for Spaces deployment - loaded from Hub)
+ model/
+ *.pth
+ *.bin
+ *.pkl
+ *.h5
+
+ # Python cache
+ __pycache__/
+ *.py[cod]
+ *$py.class
+ *.so
+
+ # Jupyter Notebook
+ .ipynb_checkpoints
+
+ # Environment files
+ .env
+ .venv
+ env/
+ venv/
+
+ # IDE files
+ .vscode/
+ .idea/
+ *.swp
+ *.swo
+
+ # OS files
+ .DS_Store
+ Thumbs.db
+
+ # Logs
+ *.log
+ logs/
+
+ # Temporary files
+ *.tmp
+ *.temp
README_SPACES.md ADDED
@@ -0,0 +1,119 @@
+ ---
+ title: ConvNeXt CheXpert Classifier with GradCAM
+ emoji: 🫁
+ colorFrom: blue
+ colorTo: green
+ sdk: gradio
+ sdk_version: "4.0.0"
+ app_file: app.py
+ pinned: false
+ license: apache-2.0
+ ---
+
+ # 🫁 ConvNeXt CheXpert Classifier with GradCAM
+
+ A web-based chest X-ray analysis tool built on ConvNeXt-Base with a CBAM attention mechanism. The app performs multi-label classification of 14 thoracic pathologies, with GradCAM visualizations showing where the model focuses its attention.
+
+ ## ✨ Features
+
+ - 🔍 **Multi-label Classification**: Detects 14 different chest conditions
+ - 📊 **Confidence Filtering**: Only shows predictions above 60% confidence
+ - 🎯 **GradCAM Visualization**: See exactly where the model is looking
+ - 🖼️ **Interactive Interface**: Easy-to-use web interface via Gradio
+ - 🏥 **Research Ready**: Optimized for medical imaging research
+
+ ## 📋 Supported Conditions
+
+ | # | Pathology | # | Pathology |
+ |---|---|---|---|
+ | 1 | No Finding | 8 | Pneumonia |
+ | 2 | Enlarged Cardiomediastinum | 9 | Atelectasis |
+ | 3 | Cardiomegaly | 10 | Pneumothorax |
+ | 4 | Lung Opacity | 11 | Pleural Effusion |
+ | 5 | Lung Lesion | 12 | Pleural Other |
+ | 6 | Edema | 13 | Fracture |
+ | 7 | Consolidation | 14 | Support Devices |
+
+ ## 🚀 Quick Start
+
+ 1. **Upload**: Click "Upload Chest X-ray" and select a chest X-ray image
+ 2. **Analyze**: The model processes the image and reports confident predictions
+ 3. **Review**: View GradCAM visualizations showing model attention regions
+
+ ## 📊 How It Works
+
+ ### Model Architecture
+ - **Backbone**: ConvNeXt-Base (modern efficient architecture)
+ - **Attention**: CBAM (Convolutional Block Attention Module)
+ - **Input**: 384×384 chest X-rays (automatically resized)
+ - **Output**: 14 pathology probabilities via sigmoid activation
+
+ ### GradCAM Visualization
+ - **Heatmap**: Shows attention intensity (red = high attention)
+ - **Overlay**: Superimposes the attention map on the original X-ray
+ - **Confidence**: Only displays findings above the 60% confidence threshold
+
+ ## 🏗️ Technical Details
+
+ ### Model Performance
+ - **Validation AUC**: 0.81 (multi-label)
+ - **Parameters**: ~88M + CBAM attention
+ - **Training Data**: CheXpert dataset (224K+ chest X-rays)
+ - **Framework**: PyTorch + timm library
+
+ ### Dependencies
+ ```bash
+ pip install -r requirements.txt
+ ```
+
+ ## ⚠️ Important Medical Disclaimer
+
+ **🚨 FOR RESEARCH & EDUCATION ONLY 🚨**
+
+ ### ❌ DO NOT USE FOR:
+ - Clinical diagnosis or treatment decisions
+ - Emergency medical situations
+ - Replacing professional radiologist review
+ - Patient care without expert validation
+
+ ### ⚠️ Limitations:
+ - Not clinically validated or FDA-approved
+ - Trained on historical Stanford data (2002-2017)
+ - Performance may vary across populations and equipment
+ - Requires qualified radiologist review for any clinical use
+
+ ### ✅ Appropriate Uses:
+ - Academic research and benchmarking
+ - Algorithm development and comparison
+ - Educational demonstrations
+ - Proof-of-concept prototypes
+
+ **Always consult qualified healthcare professionals for medical decisions.**
+
+ ## 📚 Citation
+
+ If you use this work in publications, please cite:
+
+ ```bibtex
+ @software{convnext_chexpert_attention_2025,
+   author = {Time},
+   title = {ConvNeXt-Base CheXpert Classifier with CBAM Attention},
+   year = {2025},
+   publisher = {HuggingFace},
+   url = {https://huggingface.co/spaces/your-username/convnext-chexpert-gradcam}
+ }
+ ```
+
+ ## 🔗 Links
+
+ - **Original Repository**: [GitHub](https://github.com/jikaan/convnext-chexpert-attention)
+ - **CheXpert Dataset**: [Stanford ML Group](https://stanfordmlgroup.github.io/competitions/chexpert/)
+ - **Paper**: [CheXpert: A large chest radiograph dataset](https://arxiv.org/abs/1901.07031)
+
+ ## 📄 License
+
+ Apache License 2.0 - see [LICENSE](https://github.com/jikaan/convnext-chexpert-attention/blob/main/LICENSE) for details.
+
+ ---
+
+ **Created by Time | October 2025**
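
The "How It Works" section above describes one sigmoid per pathology with a 60% cutoff. A minimal sketch of that decision rule in isolation (the label list and threshold mirror `app.py` below; the logits here are random placeholders, not model output):

```python
import torch

# Label order must match the training head (same list as app.py).
DISEASE_LABELS = [
    "No Finding", "Enlarged Cardiomediastinum", "Cardiomegaly",
    "Lung Opacity", "Lung Lesion", "Edema", "Consolidation",
    "Pneumonia", "Atelectasis", "Pneumothorax", "Pleural Effusion",
    "Pleural Other", "Fracture", "Support Devices"
]

# Random logits standing in for one forward pass of the classifier.
logits = torch.randn(1, 14)

# One independent sigmoid per label: multi-label, not a softmax over classes.
probs = torch.sigmoid(logits).squeeze(0)

# Keep only findings above the 60% confidence threshold the app uses.
confident = [
    (label, prob.item())
    for label, prob in zip(DISEASE_LABELS, probs)
    if prob.item() > 0.6
]

for label, prob in confident:
    print(f"{label}: {prob:.1%}")
```

Because each label gets an independent sigmoid, several findings can clear the threshold at once, which is what makes this multi-label rather than single-class.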
app.py ADDED
@@ -0,0 +1,326 @@
+ """
+ HuggingFace Spaces App for ConvNeXt CheXpert Classification with GradCAM
+
+ This app provides a web interface for chest X-ray classification with GradCAM
+ visualization showing model attention regions for confident predictions
+ (>60% confidence).
+
+ Usage:
+     Run this file and access the Gradio interface via the provided URL.
+ """
+
+ import os
+ import torch
+ import timm
+ import gradio as gr
+ import numpy as np
+ import torch.nn as nn
+ import matplotlib.pyplot as plt
+ from PIL import Image
+ from torchvision import transforms
+ import cv2
+
+ # GradCAM imports: install on the fly if missing, then retry the import
+ # (the PyPI package providing pytorch_grad_cam is named "grad-cam")
+ try:
+     from pytorch_grad_cam import GradCAM
+     from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget
+     from pytorch_grad_cam.utils.image import show_cam_on_image
+ except ImportError:
+     print("Installing required packages...")
+     os.system("pip install grad-cam")
+     from pytorch_grad_cam import GradCAM
+     from pytorch_grad_cam.utils.model_targets import ClassifierOutputTarget
+     from pytorch_grad_cam.utils.image import show_cam_on_image
+
+ # Disease labels in the order the model was trained on
+ DISEASE_LABELS = [
+     "No Finding", "Enlarged Cardiomediastinum", "Cardiomegaly",
+     "Lung Opacity", "Lung Lesion", "Edema", "Consolidation",
+     "Pneumonia", "Atelectasis", "Pneumothorax", "Pleural Effusion",
+     "Pleural Other", "Fracture", "Support Devices"
+ ]
+
+ # Model configuration
+ MODEL_CONFIG = {
+     "input_size": 384,
+     "num_classes": 14,
+     "mean": [0.5029414296150208] * 3,
+     "std": [0.2892409563064575] * 3
+ }
+
+ class ConvNeXtWithCBAM(nn.Module):
+     """ConvNeXt model with CBAM attention for GradCAM compatibility"""
+     def __init__(self, num_classes=14, model_name="convnext_base"):
+         super().__init__()
+         # Create ConvNeXt backbone that returns feature maps, not logits
+         self.backbone = timm.create_model(
+             model_name,
+             pretrained=False,
+             num_classes=0,
+             features_only=True
+         )
+
+         # Add CBAM attention on top of the last feature stage
+         feature_dim = self.backbone.feature_info.channels()[-1]
+         self.cbam = self._create_cbam_attention(feature_dim)
+
+         # Global pooling and classifier
+         self.global_pool = nn.AdaptiveAvgPool2d(1)
+         self.classifier = nn.Linear(feature_dim, num_classes)
+
+     def _create_cbam_attention(self, channels, reduction=16, kernel_size=7):
+         """Create CBAM attention module (channel branch at indices 0-4, spatial at 5-6)"""
+         return nn.Sequential(
+             # Channel attention
+             nn.AdaptiveAvgPool2d(1),
+             nn.Conv2d(channels, channels // reduction, 1, bias=False),
+             nn.ReLU(),
+             nn.Conv2d(channels // reduction, channels, 1, bias=False),
+             nn.Sigmoid(),
+             # Spatial attention
+             nn.Conv2d(2, 1, kernel_size, padding=kernel_size // 2, bias=False),
+             nn.Sigmoid()
+         )
+
+     def forward(self, x):
+         # Extract features from the last backbone stage
+         features = self.backbone(x)[-1]
+
+         # Channel attention (layers 0-4: pool -> bottleneck MLP -> sigmoid)
+         ca = self.cbam[:5](features)
+         features = features * ca
+
+         # Spatial attention (simplified for GradCAM): conv + sigmoid over
+         # channel-wise mean/max maps (layers 5-6 of the sequential)
+         avg_out = torch.mean(features, dim=1, keepdim=True)
+         max_out, _ = torch.max(features, dim=1, keepdim=True)
+         sa = self.cbam[5:](torch.cat([avg_out, max_out], dim=1))
+         features = features * sa
+
+         # Global pooling and classification
+         features = self.global_pool(features)
+         features = features.view(features.size(0), -1)
+         return self.classifier(features)
+
+ def load_model(model_repo="calender/Convnext-Chexpert-Attention"):
+     """Load the trained model from HuggingFace Hub"""
+     device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
+     print(f"Using device: {device}")
+
+     # Create model
+     model = ConvNeXtWithCBAM(num_classes=14).to(device)
+
+     # Load state dict from HuggingFace Hub, falling back to a local copy
+     try:
+         from huggingface_hub import hf_hub_download
+         model_path = hf_hub_download(repo_id=model_repo, filename="model.pth")
+         print(f"Downloaded model from {model_repo}")
+     except ImportError:
+         print("huggingface_hub not available, trying local model...")
+         model_path = "model/model.pth"
+
+     state_dict = torch.load(model_path, map_location=device)
+
+     # Strip the 'module.' prefix left over from DataParallel training
+     if any(key.startswith('module.') for key in state_dict.keys()):
+         state_dict = {k.replace('module.', ''): v for k, v in state_dict.items()}
+
+     model.load_state_dict(state_dict)
+     model.eval()
+
+     print("Model loaded successfully!")
+     return model, device
+
+ def predict_with_gradcam(model, device, image, confidence_threshold=0.6):
+     """Get predictions and GradCAM visualizations for confident predictions"""
+
+     # Image preprocessing
+     transform = transforms.Compose([
+         transforms.Grayscale(num_output_channels=3),  # Replicate grayscale to 3 channels
+         transforms.Resize((MODEL_CONFIG["input_size"], MODEL_CONFIG["input_size"])),
+         transforms.ToTensor(),
+         transforms.Normalize(mean=MODEL_CONFIG["mean"], std=MODEL_CONFIG["std"])
+     ])
+
+     # Prepare input
+     input_tensor = transform(image).unsqueeze(0).to(device)
+
+     # Get predictions (no gradients needed for the plain forward pass)
+     with torch.no_grad():
+         logits = model(input_tensor)
+         probabilities = torch.sigmoid(logits).squeeze().cpu().numpy()
+
+     # Find confident predictions
+     confident_predictions = []
+     for idx, (prob, disease) in enumerate(zip(probabilities, DISEASE_LABELS)):
+         if prob > confidence_threshold:
+             confident_predictions.append({
+                 'disease': disease,
+                 'confidence': float(prob),
+                 'class_idx': idx
+             })
+
+     if not confident_predictions:
+         return {
+             'predictions': [],
+             'message': f'No findings above {confidence_threshold:.0%} confidence threshold',
+             'visualizations': None
+         }
+
+     # Target layer for GradCAM: the last Conv2d in the backbone
+     target_layer = None
+     for module in reversed(list(model.backbone.modules())):
+         if isinstance(module, nn.Conv2d):
+             target_layer = module
+             break
+
+     if target_layer is None:
+         return {
+             'predictions': confident_predictions,
+             'message': 'Could not find suitable layer for GradCAM',
+             'visualizations': None
+         }
+
+     # Generate GradCAM for each confident prediction
+     visualizations = {}
+
+     for pred in confident_predictions:
+         class_idx = pred['class_idx']
+         disease = pred['disease']
+         confidence = pred['confidence']
+
+         # Target the logit of this specific class
+         targets = [ClassifierOutputTarget(class_idx)]
+
+         try:
+             with GradCAM(model=model, target_layers=[target_layer]) as cam:
+                 grayscale_cam = cam(input_tensor=input_tensor, targets=targets)[0, :]
+
+             # Convert to RGB in [0, 1] for visualization
+             rgb_img = np.array(image.convert('RGB'), dtype=np.float32) / 255.0
+
+             # Resize heatmap to match the original image
+             grayscale_cam_resized = cv2.resize(grayscale_cam, (rgb_img.shape[1], rgb_img.shape[0]))
+
+             # Create overlay
+             cam_overlay = show_cam_on_image(
+                 rgb_img,
+                 grayscale_cam_resized,
+                 use_rgb=True,
+                 image_weight=0.5,
+                 colormap=cv2.COLORMAP_JET
+             )
+
+             visualizations[disease] = {
+                 'heatmap': grayscale_cam_resized,
+                 'overlay': cam_overlay,
+                 'confidence': confidence
+             }
+
+         except Exception as e:
+             print(f"Error generating GradCAM for {disease}: {e}")
+             continue
+
+     return {
+         'predictions': confident_predictions,
+         'message': f'Found {len(confident_predictions)} confident predictions above {confidence_threshold:.0%} threshold',
+         'visualizations': visualizations
+     }
+
+ def create_gradio_interface():
+     """Create and configure the Gradio interface"""
+     model, device = load_model()
+
+     def analyze_xray(image):
+         """Analyze an uploaded X-ray image"""
+         if image is None:
+             return "Please upload a chest X-ray image", None, None
+
+         try:
+             # Get predictions and GradCAM
+             results = predict_with_gradcam(model, device, image)
+
+             if not results['predictions']:
+                 return results['message'], None, None
+
+             # Create prediction text
+             prediction_text = f"## Analysis Results\n\n{results['message']}\n\n"
+             prediction_text += "### Confident Predictions:\n\n"
+
+             for pred in results['predictions']:
+                 prediction_text += f"🔍 **{pred['disease']}**: {pred['confidence']:.1%}\n"
+
+             # Create visualization plots: one row per confident finding
+             if results['visualizations']:
+                 num_plots = len(results['visualizations'])
+                 fig, axes = plt.subplots(num_plots, 3, figsize=(15, 5 * num_plots))
+
+                 if num_plots == 1:
+                     axes = axes.reshape(1, -1)
+
+                 for i, (disease, vis_data) in enumerate(results['visualizations'].items()):
+                     # Original image
+                     axes[i, 0].imshow(image, cmap='gray')
+                     axes[i, 0].set_title(f"Original X-ray\n{disease}", fontsize=10)
+                     axes[i, 0].axis('off')
+
+                     # GradCAM heatmap
+                     axes[i, 1].imshow(vis_data['heatmap'], cmap='jet')
+                     axes[i, 1].set_title(f"GradCAM Heatmap\n{vis_data['confidence']:.1%}", fontsize=10)
+                     axes[i, 1].axis('off')
+
+                     # GradCAM overlay
+                     axes[i, 2].imshow(vis_data['overlay'])
+                     axes[i, 2].set_title(f"GradCAM Overlay\n{disease}", fontsize=10)
+                     axes[i, 2].axis('off')
+
+                 plt.tight_layout()
+
+                 return prediction_text, fig, "✅ Analysis completed successfully!"
+
+             return prediction_text, None, "✅ Analysis completed successfully!"
+
+         except Exception as e:
+             return f"❌ Error analyzing image: {str(e)}", None, "Analysis failed"
+
+     # Create Gradio interface
+     interface = gr.Interface(
+         fn=analyze_xray,
+         inputs=gr.Image(label="Upload Chest X-ray", type="pil"),
+         outputs=[
+             gr.Markdown(label="Analysis Results"),
+             gr.Plot(label="GradCAM Visualizations"),
+             gr.Textbox(label="Status", interactive=False)
+         ],
+         title="🫁 ConvNeXt CheXpert Classifier with GradCAM",
+         description="""
+         **Medical AI for Chest X-ray Analysis**
+
+         This tool uses a ConvNeXt-Base model with CBAM attention to analyze chest X-rays and identify 14 different thoracic pathologies.
+
+         **Features:**
+         - 🔍 Multi-label classification of 14 chest conditions
+         - 📊 Shows only confident predictions (>60% confidence)
+         - 🎯 GradCAM visualization showing model attention regions
+         - 🏥 Designed for research and educational purposes
+
+         **⚠️ Important Medical Disclaimer:**
+         This tool is for research and educational purposes only. Always consult qualified healthcare professionals for medical decisions.
+
+         **Supported Conditions:**
+         No Finding, Enlarged Cardiomediastinum, Cardiomegaly, Lung Opacity, Lung Lesion, Edema, Consolidation, Pneumonia, Atelectasis, Pneumothorax, Pleural Effusion, Pleural Other, Fracture, Support Devices
+         """,
+         theme="default",
+         allow_flagging="never"
+     )
+
+     return interface
+
+ # Main execution
+ if __name__ == "__main__":
+     print("Starting ConvNeXt CheXpert GradCAM App...")
+     interface = create_gradio_interface()
+     interface.launch(
+         server_name="0.0.0.0",
+         server_port=7860,
+         share=True,
+         show_error=True
+     )
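
A quick local smoke test for the preprocessing stack that `predict_with_gradcam` builds from `MODEL_CONFIG`, useful for checking tensor shapes before the model weights are wired in (the synthetic 512×512 image is a placeholder, not a real X-ray):

```python
import numpy as np
from PIL import Image
from torchvision import transforms

# Synthetic grayscale image standing in for a real chest X-ray upload.
dummy = Image.fromarray(np.random.randint(0, 256, (512, 512), dtype=np.uint8))

# Same transform stack as predict_with_gradcam in app.py.
transform = transforms.Compose([
    transforms.Grayscale(num_output_channels=3),  # replicate to 3 channels
    transforms.Resize((384, 384)),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.5029414296150208] * 3,
                         std=[0.2892409563064575] * 3),
])

x = transform(dummy).unsqueeze(0)
print(x.shape)  # expected: torch.Size([1, 3, 384, 384])
```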
requirements.txt ADDED
@@ -0,0 +1,32 @@
+ # Core dependencies for ConvNeXt CheXpert Classification with GradCAM
+ torch>=2.0.0
+ torchvision>=0.15.0
+ torchaudio>=2.0.0
+
+ # Computer vision and image processing
+ timm>=0.9.0
+ opencv-python>=4.8.0
+ Pillow>=9.0.0
+ numpy>=1.24.0
+
+ # Data science and visualization
+ scikit-learn>=1.3.0
+ matplotlib>=3.7.0
+
+ # HuggingFace ecosystem
+ datasets>=2.10.0
+ huggingface-hub>=0.15.0
+
+ # Utilities
+ tqdm>=4.65.0
+
+ # Grad-CAM visualization (PyPI name for the pytorch_grad_cam import)
+ grad-cam>=1.2.0
+
+ # HuggingFace Spaces web interface
+ gradio>=4.0.0
+
+ # Optional: Enhanced model training (if needed)
+ ema-pytorch>=0.2.0
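
A small sanity check that the stack above imports cleanly after `pip install -r requirements.txt`; handy for debugging a fresh Spaces build (nothing here is specific to this repo):

```python
# Import everything app.py needs; any failure here would also break the Space.
import torch, torchvision, timm, cv2, gradio, matplotlib, numpy, PIL
import huggingface_hub
import pytorch_grad_cam  # installed from the "grad-cam" PyPI package

for name, mod in [
    ("torch", torch), ("torchvision", torchvision), ("timm", timm),
    ("opencv-python", cv2), ("gradio", gradio), ("numpy", numpy),
    ("matplotlib", matplotlib), ("Pillow", PIL),
    ("huggingface-hub", huggingface_hub),
]:
    print(f"{name} {mod.__version__}")

print("CUDA available:", torch.cuda.is_available())
```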