kshitijthakkar committed on
Commit 0f82736 · 1 Parent(s): 7573aeb

docs: Enhance documentation screen with logos, badges, TOC, and accordions


- Fix genai_otel_instrument usage examples (use genai_otel.instrument() instead of instrument_llm)
- Add logos for all 4 projects (About, TraceVerde, SMOLTRACE, MCP Server)
- Add comprehensive badges from project READMEs
  - About: TraceMind logo
  - TraceVerde: PyPI, GitHub, OpenTelemetry, Downloads badges
  - SMOLTRACE: GitHub, PyPI, License badges
  - MCP Server: MCP Hackathon, Track 1, HF Space, Google Gemini badges
- Add table of contents to all tabs for easy navigation
- Wrap major sections in accordions for better UX
  - Open by default: Quick Start, Architecture, Key Features
  - Collapsed: What Gets Captured, What Gets Generated, MCP Tools Provided
- Remove incorrect SMOLTRACE examples link (folder doesn't exist)

Improves documentation readability and navigation across all tabs.

Files changed (1)
  1. screens/documentation.py +215 -45
screens/documentation.py CHANGED
@@ -11,13 +11,41 @@ def create_about_tab():
11
  return gr.Markdown("""
12
  # 🧠 TraceMind Ecosystem
13
 
14
  **The Complete AI Agent Evaluation Platform**
15
 
16
  TraceMind is a comprehensive ecosystem for evaluating, monitoring, and optimizing AI agents. Built on open-source foundations and powered by the Model Context Protocol (MCP), TraceMind provides everything you need for production-grade agent evaluation.
17
 
18
  ---
19
 
20
- ## πŸ—οΈ Architecture Overview
21
 
22
  The TraceMind ecosystem consists of four integrated components:
23
 
@@ -45,16 +73,19 @@ The TraceMind ecosystem consists of four integrated components:
45
  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
46
  ```
47
 
 
 
48
  ---
49
 
50
- ## 🔄 The Complete Flow
 
51
 
52
  ### 1. **Instrument Your Agents** (TraceVerde)
53
  ```python
54
- from genai_otel_instrument import instrument_llm
55
 
56
  # Zero-code instrumentation
57
- instrument_llm(enable_content_capture=True)
58
 
59
  # Your agent code runs normally, but now traced!
60
  agent.run("What's the weather in Tokyo?")
@@ -75,9 +106,12 @@ smoltrace-eval \\
75
  - Explore detailed traces
76
  - Ask questions with MCP-powered chat
77
 
 
 
78
  ---
79
 
80
- ## 🎯 Key Features
 
81
 
82
  ### For Developers
83
  - ✅ **Zero-code Instrumentation**: Just import and go
@@ -97,6 +131,8 @@ smoltrace-eval \\
97
  - ✅ **MCP Integration**: Connect to intelligent analysis tools
98
  - ✅ **HuggingFace Native**: Seamless dataset integration
99
 
 
 
100
  ---
101
 
102
  ## πŸ† Built for MCP's 1st Birthday Hackathon
@@ -133,7 +169,8 @@ Use the tabs above to explore detailed documentation for each component:
133
 
134
  ---
135
 
136
- ## 💡 Getting Started
 
137
 
138
  ### Quick Start (5 minutes)
139
  ```bash
@@ -155,6 +192,8 @@ smoltrace-eval --model openai/gpt-4 --agent-type tool
155
  - Explore the **Leaderboard** to see real evaluation data
156
  - Check the **Trace Detail** screen for deep inspection
157
 
 
 
158
  ---
159
 
160
  ## 🤝 Contributing
@@ -189,10 +228,46 @@ def create_traceverde_tab():
189
  return gr.Markdown("""
190
  # 🔭 TraceVerde (genai_otel_instrument)
191
 
192
  **Automatic OpenTelemetry Instrumentation for LLM Applications**
193
 
194
- [![GitHub](https://img.shields.io/badge/GitHub-genai__otel__instrument-black?logo=github)](https://github.com/Mandark-droid/genai_otel_instrument)
195
- [![PyPI](https://img.shields.io/badge/PyPI-genai--otel--instrument-blue?logo=pypi)](https://pypi.org/project/genai-otel-instrument)
196
 
197
  ---
198
 
@@ -227,12 +302,39 @@ pip install genai-otel-instrument[all]
227
 
228
  ---
229
 
230
- ## 🚀 Quick Start
 
231
 
232
  ### Basic Usage
233
 
234
  ```python
235
- from genai_otel_instrument import instrument_llm
236
  from opentelemetry import trace
237
  from opentelemetry.sdk.trace import TracerProvider
238
  from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor
@@ -243,7 +345,8 @@ span_processor = SimpleSpanProcessor(ConsoleSpanExporter())
243
  trace.get_tracer_provider().add_span_processor(span_processor)
244
 
245
  # 2. Instrument all LLM frameworks (one line!)
246
- instrument_llm(enable_content_capture=True)
 
247
 
248
  # 3. Use your LLM framework normally - it's now traced!
249
  from litellm import completion
@@ -256,6 +359,8 @@ response = completion(
256
  # Traces are automatically captured and exported!
257
  ```
258
 
 
 
259
  ---
260
 
261
  ## 🎯 Supported Frameworks
@@ -275,7 +380,8 @@ TraceVerde automatically instruments:
275
 
276
  ---
277
 
278
- ## 📊 What Gets Captured?
 
279
 
280
  ### LLM Spans
281
 
@@ -336,6 +442,8 @@ When enabled, captures real-time GPU data:
336
  }
337
  ```
338
 
 
 
339
  ---
340
 
341
  ## 🌱 CO2 Emissions Tracking
@@ -343,13 +451,10 @@ When enabled, captures real-time GPU data:
343
  TraceVerde integrates with CodeCarbon for sustainability monitoring:
344
 
345
  ```python
346
- from genai_otel_instrument import instrument_llm
347
 
348
  # Enable CO2 tracking
349
- instrument_llm(
350
- enable_content_capture=True,
351
- enable_carbon_tracking=True
352
- )
353
 
354
  # Your LLM calls now track carbon emissions!
355
  ```
@@ -375,25 +480,16 @@ otlp_exporter = OTLPSpanExporter(endpoint="http://localhost:4317")
375
  span_processor = BatchSpanProcessor(otlp_exporter)
376
  trace.get_tracer_provider().add_span_processor(span_processor)
377
 
378
- instrument_llm(enable_content_capture=True)
379
- ```
380
-
381
- ### Content Capture Control
382
-
383
- ```python
384
- # Capture full prompts and responses (default: True)
385
- instrument_llm(enable_content_capture=True)
386
-
387
- # Disable for privacy/compliance
388
- instrument_llm(enable_content_capture=False)
389
  ```
390
 
391
  ### GPU Metrics
392
 
393
  ```python
394
  # Enable GPU monitoring (requires pynvml)
395
- instrument_llm(
396
- enable_content_capture=True,
397
  enable_gpu_metrics=True,
398
  gpu_poll_interval=1.0 # seconds
399
  )
@@ -423,7 +519,8 @@ results = evaluate_agent(
423
  ### 1. Development & Debugging
424
  ```python
425
  # See exactly what your agent is doing
426
- instrument_llm(enable_content_capture=True)
 
427
 
428
  # Run your agent
429
  agent.run("Complex task")
@@ -439,13 +536,15 @@ from opentelemetry.exporter.otlp.proto.grpc.trace_exporter import OTLPSpanExport
439
  otlp_exporter = OTLPSpanExporter(endpoint="https://your-otel-collector")
440
  # ... setup processor ...
441
 
442
- instrument_llm(enable_content_capture=False) # Privacy mode
 
443
  ```
444
 
445
  ### 3. Cost Analysis
446
  ```python
447
  # Track costs across all LLM calls
448
- instrument_llm(enable_content_capture=True)
 
449
 
450
  # Analyze cost per user/session/feature
451
  # All costs automatically captured in span attributes
@@ -454,7 +553,8 @@ instrument_llm(enable_content_capture=True)
454
  ### 4. Sustainability Reporting
455
  ```python
456
  # Monitor environmental impact
457
- instrument_llm(
 
458
  enable_carbon_tracking=True,
459
  enable_gpu_metrics=True
460
  )
@@ -505,10 +605,11 @@ pip install genai-otel-instrument[gpu]
505
  nvidia-smi
506
  ```
507
 
508
- **Q: Content capture not working?**
509
  ```python
510
- # Explicitly enable content capture
511
- instrument_llm(enable_content_capture=True)
 
512
  ```
513
 
514
  ---
@@ -539,8 +640,35 @@ def create_smoltrace_tab():
539
 
540
  **Lightweight Agent Evaluation Engine with Built-in OpenTelemetry Tracing**
541
 
542
- [![GitHub](https://img.shields.io/badge/GitHub-SMOLTRACE-black?logo=github)](https://github.com/Mandark-droid/SMOLTRACE)
543
- [![PyPI](https://img.shields.io/badge/PyPI-smoltrace-blue?logo=pypi)](https://pypi.org/project/smoltrace/)
544
 
545
  ---
546
 
@@ -576,7 +704,8 @@ pip install smoltrace[all]
576
 
577
  ---
578
 
579
- ## 🚀 Quick Start
 
580
 
581
  ### Command Line
582
 
@@ -624,6 +753,8 @@ results.upload_to_hf(
624
  )
625
  ```
626
 
 
 
627
  ---
628
 
629
  ## 🎯 Evaluation Types
@@ -660,7 +791,8 @@ smoltrace-eval --model gpt-4 --agent-type both
660
 
661
  ---
662
 
663
- ## 📊 What Gets Generated?
 
664
 
665
  SMOLTRACE creates **4 structured datasets** on HuggingFace:
666
 
@@ -771,6 +903,8 @@ GPU metrics and performance data:
771
  }
772
  ```
773
 
 
 
774
  ---
775
 
776
  ## 🔧 Configuration Options
@@ -977,8 +1111,7 @@ print(f"Estimated time: {gpu_cost.duration_minutes} minutes")
977
 
978
  - **GitHub**: [github.com/Mandark-droid/SMOLTRACE](https://github.com/Mandark-droid/SMOLTRACE)
979
  - **PyPI**: [pypi.org/project/smoltrace](https://pypi.org/project/smoltrace/)
980
- - **Examples**: [github.com/Mandark-droid/SMOLTRACE/examples](https://github.com/Mandark-droid/SMOLTRACE/tree/main/examples)
981
- - **Dataset Schema**: [github.com/Mandark-droid/SMOLTRACE/docs/schema.md](https://github.com/Mandark-droid/SMOLTRACE/blob/main/docs/schema.md)
982
 
983
  ---
984
 
@@ -1036,10 +1169,44 @@ def create_mcp_server_tab():
1036
  return gr.Markdown("""
1037
  # 🔌 TraceMind-MCP-Server
1038
 
1039
  **Building MCP: Intelligent Analysis Tools for Agent Evaluation**
1040
 
1041
- [![HF Space](https://img.shields.io/badge/HuggingFace-TraceMind--MCP--Server-yellow?logo=huggingface)](https://huggingface.co/spaces/MCP-1st-Birthday/TraceMind-mcp-server)
1042
  [![Track 1](https://img.shields.io/badge/Track-Building%20MCP%20(Enterprise)-blue)](https://github.com/modelcontextprotocol/hackathon)
1043
 
1044
  ---
1045
 
@@ -1056,7 +1223,8 @@ TraceMind-MCP-Server is a **Track 1 (Building MCP)** submission that provides MC
1056
 
1057
  ---
1058
 
1059
- ## πŸ› οΈ MCP Tools Provided
 
1060
 
1061
  ### 1. `analyze_leaderboard`
1062
 
@@ -1268,6 +1436,8 @@ Consider hybrid approach: Llama for routine tasks, GPT-4 for complex ones.
1268
  3. Identifies patterns
1269
  4. Suggests optimizations
1270
 
 
 
1271
  ---
1272
 
1273
  ## 🌐 Accessing the MCP Server
 
11
  return gr.Markdown("""
12
  # 🧠 TraceMind Ecosystem
13
 
14
+ <div align="center">
15
+ <img src="https://raw.githubusercontent.com/Mandark-droid/TraceMind-AI/assets/Logo.png" alt="TraceMind Logo" width="300"/>
16
+ </div>
17
+
18
+ <br/>
19
+
20
  **The Complete AI Agent Evaluation Platform**
21
 
22
+ [![MCP's 1st Birthday Hackathon](https://img.shields.io/badge/MCP%27s%201st%20Birthday-Hackathon-blue)](https://github.com/modelcontextprotocol)
23
+ [![Track 2](https://img.shields.io/badge/Track-MCP%20in%20Action%20(Enterprise)-purple)](https://github.com/modelcontextprotocol/hackathon)
24
+ [![Powered by Gradio](https://img.shields.io/badge/Powered%20by-Gradio-orange)](https://gradio.app/)
25
+
26
+ > **🎯 Track 2 Submission**: MCP in Action (Enterprise)
27
+ > **📅 MCP's 1st Birthday Hackathon**: November 14-30, 2025
28
+
29
  TraceMind is a comprehensive ecosystem for evaluating, monitoring, and optimizing AI agents. Built on open-source foundations and powered by the Model Context Protocol (MCP), TraceMind provides everything you need for production-grade agent evaluation.
30
 
31
  ---
32
 
33
+ ## 📖 Table of Contents
34
+
35
+ - [Architecture Overview](#️-architecture-overview)
36
+ - [The Complete Flow](#-the-complete-flow)
37
+ - [Key Features](#-key-features)
38
+ - [Built for MCP's 1st Birthday Hackathon](#-built-for-mcps-1st-birthday-hackathon)
39
+ - [Quick Links](#-quick-links)
40
+ - [Documentation Navigation](#-documentation-navigation)
41
+ - [Getting Started](#-getting-started)
42
+ - [Contributing](#-contributing)
43
+ - [Acknowledgments](#-acknowledgments)
44
+
45
+ ---
46
+
47
+ <details open>
48
+ <summary><h2>πŸ—οΈ Architecture Overview</h2></summary>
49
 
50
  The TraceMind ecosystem consists of four integrated components:
51
 
 
73
  β””β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”€β”˜
74
  ```
75
 
76
+ </details>
77
+
78
  ---
79
 
80
+ <details open>
81
+ <summary><h2>🔄 The Complete Flow</h2></summary>
82
 
83
  ### 1. **Instrument Your Agents** (TraceVerde)
84
  ```python
85
+ import genai_otel
86
 
87
  # Zero-code instrumentation
88
+ genai_otel.instrument()
89
 
90
  # Your agent code runs normally, but now traced!
91
  agent.run("What's the weather in Tokyo?")
 
106
  - Explore detailed traces
107
  - Ask questions with MCP-powered chat
108
 
109
+ </details>
110
+
111
  ---
112
 
113
+ <details open>
114
+ <summary><h2>🎯 Key Features</h2></summary>
115
 
116
  ### For Developers
117
  - ✅ **Zero-code Instrumentation**: Just import and go
 
131
  - ✅ **MCP Integration**: Connect to intelligent analysis tools
132
  - ✅ **HuggingFace Native**: Seamless dataset integration
133
 
134
+ </details>
135
+
136
  ---
137
 
138
  ## πŸ† Built for MCP's 1st Birthday Hackathon
 
169
 
170
  ---
171
 
172
+ <details open>
173
+ <summary><h2>💡 Getting Started</h2></summary>
174
 
175
  ### Quick Start (5 minutes)
176
  ```bash
 
192
  - Explore the **Leaderboard** to see real evaluation data
193
  - Check the **Trace Detail** screen for deep inspection
194
 
195
+ </details>
196
+
197
  ---
198
 
199
  ## 🤝 Contributing
 
228
  return gr.Markdown("""
229
  # 🔭 TraceVerde (genai_otel_instrument)
230
 
231
+ <div align="center">
232
+ <img src="https://raw.githubusercontent.com/Mandark-droid/genai_otel_instrument/main/.github/images/Logo.jpg" alt="TraceVerde Logo" width="400"/>
233
+ </div>
234
+
235
+ <br/>
236
+
237
+ [![PyPI version](https://badge.fury.io/py/genai-otel-instrument.svg)](https://badge.fury.io/py/genai-otel-instrument)
238
+ [![Python Versions](https://img.shields.io/pypi/pyversions/genai-otel-instrument.svg)](https://pypi.org/project/genai-otel-instrument/)
239
+ [![License](https://img.shields.io/badge/License-AGPL%203.0-blue.svg)](https://www.gnu.org/licenses/agpl-3.0)
240
+ [![Downloads](https://static.pepy.tech/badge/genai-otel-instrument)](https://pepy.tech/project/genai-otel-instrument)
241
+ [![Downloads/Month](https://static.pepy.tech/badge/genai-otel-instrument/month)](https://pepy.tech/project/genai-otel-instrument)
242
+
243
+ [![GitHub Stars](https://img.shields.io/github/stars/Mandark-droid/genai_otel_instrument?style=social)](https://github.com/Mandark-droid/genai_otel_instrument)
244
+ [![GitHub Forks](https://img.shields.io/github/forks/Mandark-droid/genai_otel_instrument?style=social)](https://github.com/Mandark-droid/genai_otel_instrument)
245
+ [![GitHub Issues](https://img.shields.io/github/issues/Mandark-droid/genai_otel_instrument)](https://github.com/Mandark-droid/genai_otel_instrument/issues)
246
+
247
+ [![OpenTelemetry](https://img.shields.io/badge/OpenTelemetry-1.20%2B-blueviolet)](https://opentelemetry.io/)
248
+ [![Semantic Conventions](https://img.shields.io/badge/OTel%20Semconv-GenAI%20v1.28-orange)](https://opentelemetry.io/docs/specs/semconv/gen-ai/)
249
+ [![Code Style: Black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
250
+
251
  **Automatic OpenTelemetry Instrumentation for LLM Applications**
252
 
253
+ ---
254
+
255
+ ## 📖 Table of Contents
256
+
257
+ - [What is TraceVerde?](#what-is-traceverde)
258
+ - [Installation](#-installation)
259
+ - [Quick Start](#-quick-start)
260
+ - [Supported Frameworks](#-supported-frameworks)
261
+ - [What Gets Captured?](#-what-gets-captured)
262
+ - [CO2 Emissions Tracking](#-co2-emissions-tracking)
263
+ - [Advanced Configuration](#-advanced-configuration)
264
+ - [Integration with SMOLTRACE](#-integration-with-smoltrace)
265
+ - [Use Cases](#-use-cases)
266
+ - [OpenTelemetry Standards](#-opentelemetry-standards)
267
+ - [Resources](#-resources)
268
+ - [Troubleshooting](#-troubleshooting)
269
+ - [License](#-license)
270
+ - [Contributing](#-contributing)
271
 
272
  ---
273
 
 
302
 
303
  ---
304
 
305
+ <details open>
306
+ <summary><h2>🚀 Quick Start</h2></summary>
307
 
308
  ### Basic Usage
309
 
310
+ **Option 1: Environment Variables (No code changes)**
311
+
312
+ ```bash
313
+ export OTEL_SERVICE_NAME=my-llm-app
314
+ export OTEL_EXPORTER_OTLP_ENDPOINT=http://localhost:4318
315
+ python your_app.py
316
+ ```
317
+
318
+ **Option 2: One line of code**
319
+
320
+ ```python
321
+ import genai_otel
322
+ genai_otel.instrument()
323
+
324
+ # Your existing code works unchanged
325
+ import openai
326
+ client = openai.OpenAI()
327
+ response = client.chat.completions.create(
328
+ model="gpt-4",
329
+ messages=[{"role": "user", "content": "Hello!"}]
330
+ )
331
+
332
+ # Traces are automatically captured and exported!
333
+ ```
334
+
335
+ **Option 3: With OpenTelemetry Setup**
336
+
337
  ```python
 
338
  from opentelemetry import trace
339
  from opentelemetry.sdk.trace import TracerProvider
340
  from opentelemetry.sdk.trace.export import ConsoleSpanExporter, SimpleSpanProcessor
 
345
  trace.get_tracer_provider().add_span_processor(span_processor)
346
 
347
  # 2. Instrument all LLM frameworks (one line!)
348
+ import genai_otel
349
+ genai_otel.instrument()
350
 
351
  # 3. Use your LLM framework normally - it's now traced!
352
  from litellm import completion
 
359
  # Traces are automatically captured and exported!
360
  ```
361
 
362
+ </details>
363
+
364
  ---
365
 
366
  ## 🎯 Supported Frameworks
 
380
 
381
  ---
382
 
383
+ <details>
384
+ <summary><h2>📊 What Gets Captured?</h2></summary>
385
 
386
  ### LLM Spans
387
 
 
442
  }
443
  ```
444
 
445
+ </details>
446
+
447
  ---
448
 
449
  ## 🌱 CO2 Emissions Tracking
 
451
  TraceVerde integrates with CodeCarbon for sustainability monitoring:
452
 
453
  ```python
454
+ import genai_otel
455
 
456
  # Enable CO2 tracking
457
+ genai_otel.instrument(enable_carbon_tracking=True)
458
 
459
  # Your LLM calls now track carbon emissions!
460
  ```
 
480
  span_processor = BatchSpanProcessor(otlp_exporter)
481
  trace.get_tracer_provider().add_span_processor(span_processor)
482
 
483
+ import genai_otel
484
+ genai_otel.instrument()
485
  ```
486
 
487
  ### GPU Metrics
488
 
489
  ```python
490
  # Enable GPU monitoring (requires pynvml)
491
+ import genai_otel
492
+ genai_otel.instrument(
493
  enable_gpu_metrics=True,
494
  gpu_poll_interval=1.0 # seconds
495
  )
 
519
  ### 1. Development & Debugging
520
  ```python
521
  # See exactly what your agent is doing
522
+ import genai_otel
523
+ genai_otel.instrument()
524
 
525
  # Run your agent
526
  agent.run("Complex task")
 
536
  otlp_exporter = OTLPSpanExporter(endpoint="https://your-otel-collector")
537
  # ... setup processor ...
538
 
539
+ import genai_otel
540
+ genai_otel.instrument()
541
  ```
542
 
543
  ### 3. Cost Analysis
544
  ```python
545
  # Track costs across all LLM calls
546
+ import genai_otel
547
+ genai_otel.instrument()
548
 
549
  # Analyze cost per user/session/feature
550
  # All costs automatically captured in span attributes
 
553
  ### 4. Sustainability Reporting
554
  ```python
555
  # Monitor environmental impact
556
+ import genai_otel
557
+ genai_otel.instrument(
558
  enable_carbon_tracking=True,
559
  enable_gpu_metrics=True
560
  )
 
605
  nvidia-smi
606
  ```
607
 
608
+ **Q: How to configure different options?**
609
  ```python
610
+ # Use environment variables or pass options to instrument()
611
+ import genai_otel
612
+ genai_otel.instrument(enable_gpu_metrics=True)
613
  ```
614
 
615
  ---
 
640
 
641
  **Lightweight Agent Evaluation Engine with Built-in OpenTelemetry Tracing**
642
 
643
+ [![Python](https://img.shields.io/badge/Python-3.10%2B-blue)](https://www.python.org/downloads/)
644
+ [![License](https://img.shields.io/badge/License-AGPL--3.0-blue.svg)](https://github.com/Mandark-droid/SMOLTRACE/blob/main/LICENSE)
645
+ [![PyPI version](https://badge.fury.io/py/smoltrace.svg)](https://badge.fury.io/py/smoltrace)
646
+ [![Downloads](https://static.pepy.tech/badge/smoltrace)](https://pepy.tech/project/smoltrace)
647
+ [![Downloads/Month](https://static.pepy.tech/badge/smoltrace/month)](https://pepy.tech/project/smoltrace)
648
+ [![Code style: black](https://img.shields.io/badge/code%20style-black-000000.svg)](https://github.com/psf/black)
649
+ [![Imports: isort](https://img.shields.io/badge/%20imports-isort-%231674b1?style=flat&labelColor=ef8336)](https://pycqa.github.io/isort/)
650
+ [![Tests](https://img.shields.io/github/actions/workflow/status/Mandark-droid/SMOLTRACE/test.yml?branch=main&label=tests)](https://github.com/Mandark-droid/SMOLTRACE/actions?query=workflow%3Atest)
651
+ [![Docs](https://img.shields.io/badge/docs-stable-blue.svg)](https://huggingface.co/docs/smoltrace/en/index)
652
+
653
+ ---
654
+
655
+ ## 📖 Table of Contents
656
+
657
+ - [What is SMOLTRACE?](#what-is-smoltrace)
658
+ - [Installation](#-installation)
659
+ - [Quick Start](#-quick-start)
660
+ - [Evaluation Types](#-evaluation-types)
661
+ - [What Gets Generated?](#-what-gets-generated)
662
+ - [Configuration Options](#-configuration-options)
663
+ - [Integration with HuggingFace Jobs](#️-integration-with-huggingface-jobs)
664
+ - [Integration with TraceMind-AI](#-integration-with-tracemind-ai)
665
+ - [Best Practices](#-best-practices)
666
+ - [Cost Estimation](#-cost-estimation)
667
+ - [Architecture](#-architecture)
668
+ - [Resources](#-resources)
669
+ - [Troubleshooting](#-troubleshooting)
670
+ - [License](#-license)
671
+ - [Contributing](#-contributing)
672
 
673
  ---
674
 
 
704
 
705
  ---
706
 
707
+ <details open>
708
+ <summary><h2>🚀 Quick Start</h2></summary>
709
 
710
  ### Command Line
711
 
 
753
  )
754
  ```
755
 
756
+ </details>
757
+
758
  ---
759
 
760
  ## 🎯 Evaluation Types
 
791
 
792
  ---
793
 
794
+ <details>
795
+ <summary><h2>📊 What Gets Generated?</h2></summary>
796
 
797
  SMOLTRACE creates **4 structured datasets** on HuggingFace:
798
 
 
903
  }
904
  ```
905
 
906
+ </details>
907
+
908
  ---
909
 
910
  ## 🔧 Configuration Options
 
1111
 
1112
  - **GitHub**: [github.com/Mandark-droid/SMOLTRACE](https://github.com/Mandark-droid/SMOLTRACE)
1113
  - **PyPI**: [pypi.org/project/smoltrace](https://pypi.org/project/smoltrace/)
1114
+ - **Documentation**: [SMOLTRACE README](https://github.com/Mandark-droid/SMOLTRACE#readme)
 
1115
 
1116
  ---
1117
 
 
1169
  return gr.Markdown("""
1170
  # 🔌 TraceMind-MCP-Server
1171
 
1172
+ <div align="center">
1173
+ <img src="https://raw.githubusercontent.com/Mandark-droid/TraceMind-mcp-server/assets/Logo.png" alt="TraceMind MCP Server Logo" width="300"/>
1174
+ </div>
1175
+
1176
+ <br/>
1177
+
1178
  **Building MCP: Intelligent Analysis Tools for Agent Evaluation**
1179
 
1180
+ [![MCP's 1st Birthday Hackathon](https://img.shields.io/badge/MCP%27s%201st%20Birthday-Hackathon-blue)](https://github.com/modelcontextprotocol)
1181
  [![Track 1](https://img.shields.io/badge/Track-Building%20MCP%20(Enterprise)-blue)](https://github.com/modelcontextprotocol/hackathon)
1182
+ [![HF Space](https://img.shields.io/badge/HuggingFace-TraceMind--MCP--Server-yellow?logo=huggingface)](https://huggingface.co/spaces/MCP-1st-Birthday/TraceMind-mcp-server)
1183
+ [![Google Gemini](https://img.shields.io/badge/Powered%20by-Google%20Gemini%202.5%20Pro-orange)](https://ai.google.dev/)
1184
+
1185
+ > **🎯 Track 1 Submission**: Building MCP (Enterprise)
1186
+ > **📅 MCP's 1st Birthday Hackathon**: November 14-30, 2025
1187
+
1188
+ ---
1189
+
1190
+ ## 📖 Table of Contents
1191
+
1192
+ - [What is TraceMind-MCP-Server?](#what-is-tracemind-mcp-server)
1193
+ - [MCP Tools Provided](#️-mcp-tools-provided)
1194
+ - [analyze_leaderboard](#1-analyze_leaderboard)
1195
+ - [estimate_cost](#2-estimate_cost)
1196
+ - [debug_trace](#3-debug_trace)
1197
+ - [compare_runs](#4-compare_runs)
1198
+ - [analyze_results](#5-analyze_results)
1199
+ - [Accessing the MCP Server](#-accessing-the-mcp-server)
1200
+ - [Use Cases](#-use-cases)
1201
+ - [Architecture](#️-architecture)
1202
+ - [Configuration](#-configuration)
1203
+ - [Dataset Requirements](#-dataset-requirements)
1204
+ - [Learning Resources](#-learning-resources)
1205
+ - [Troubleshooting](#-troubleshooting)
1206
+ - [Links](#-links)
1207
+ - [License](#-license)
1208
+ - [Contributing](#-contributing)
1209
+ - [MCP's 1st Birthday Hackathon](#-mcps-1st-birthday-hackathon)
1210
 
1211
  ---
1212
 
 
1223
 
1224
  ---
1225
 
1226
+ <details>
1227
+ <summary><h2>🛠️ MCP Tools Provided</h2></summary>
1228
 
1229
  ### 1. `analyze_leaderboard`
1230
 
 
1436
  3. Identifies patterns
1437
  4. Suggests optimizations
1438
 
1439
+ </details>
1440
+
1441
  ---
1442
 
1443
  ## 🌐 Accessing the MCP Server