Spaces: Running on Zero
Nithya committed · Commit 9272247
1 Parent(s): 60db161
updated interface
app.py
CHANGED
@@ -291,8 +291,8 @@ with gr.Blocks(css=css) as demo:
     gr.Markdown("""
     ## Instructions
     In this demo you can interact with the model in two ways:
-    1. **Call and response**: The model will try to continue the idea that you input. This is similar to 'primed generation' discussed in the paper. The last 4 s of the audio will be considered as a 'prime' for the model to continue. <br><br>
-    2. **Melodic reinterpretation**: Akin to the idea of 'coarse pitch conditioning' presented in the paper, you can input a pitch contour and the model will generate audio that is similar to but not exactly the same. <br><br>
+    1. **[Call and response](https://snnithya.github.io/gamadhani-samples/5primed_generation/)**: The model will try to continue the idea that you input. This is similar to 'primed generation' discussed in the paper. The last 4 s of the audio will be considered as a 'prime' for the model to continue. <br><br>
+    2. **[Melodic reinterpretation](https://snnithya.github.io/gamadhani-samples/6coarsepitch/)**: Akin to the idea of 'coarse pitch conditioning' presented in the paper, you can input a pitch contour and the model will generate audio that is similar to but not exactly the same. <br><br>
     ### Upload an audio file or record your voice to get started!
     """)
     gr.Markdown("""
@@ -309,12 +309,22 @@ with gr.Blocks(css=css) as demo:
     """)
     model_dropdown = gr.Dropdown(["Diffusion Pitch Generator"], label="Select a model type")
     task_dropdown = gr.Dropdown(label="Select a task", choices=["Call and Response", "Melodic Reinterpretation"])
-
+
     with gr.Row(equal_height=True):
         with gr.Column():
             audio = gr.Audio(label="Input")
+            examples = gr.Examples(
+                examples=[
+                    ["examples/ex1.wav"],
+                    ["examples/ex2.wav"],
+                    ["examples/ex3.wav"],
+                    ["examples/ex4.wav"],
+                    ["examples/ex5.wav"]
+                ],
+                inputs=audio
+            )
         with gr.Column():
-            generated_audio = gr.Audio(label="Generated Audio")
+            generated_audio = gr.Audio(label="Generated Audio", elem_id="audio")
     with gr.Row():
         with gr.Column():
             with gr.Accordion("View Pitch Plot"):
@@ -322,16 +332,7 @@ with gr.Blocks(css=css) as demo:
         with gr.Column():
             with gr.Accordion("View Pitch Plot"):
                 generated_pitch = gr.Plot(label="Generated Pitch")
-
-    examples=[
-        ["examples/ex1.wav"],
-        ["examples/ex2.wav"],
-        ["examples/ex3.wav"],
-        ["examples/ex4.wav"],
-        ["examples/ex5.wav"]
-    ],
-    inputs=audio
-    )
+    sbmt = gr.Button()
     sbmt.click(container_generate, inputs=[model_dropdown, task_dropdown, audio], outputs=[generated_audio, user_input, generated_pitch])
 
 def main(argv):
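
The "call and response" instructions in the diff say the last 4 s of the uploaded audio are taken as the prime for continuation. A minimal sketch of that trimming step, assuming a plain sample buffer and a known sample rate (the function name and signature are illustrative, not from the Space's code):

```python
def take_prime(samples, sample_rate, prime_seconds=4.0):
    """Keep only the trailing `prime_seconds` of audio.

    `samples` is any sequence of audio samples at `sample_rate` Hz;
    clips shorter than the prime window pass through unchanged.
    """
    n = int(sample_rate * prime_seconds)
    return samples[-n:] if len(samples) > n else samples
```

For example, at 16 kHz a 4-second prime is the last 64 000 samples, so a longer recording is truncated to exactly that window.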
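
"Melodic reinterpretation" conditions generation on a coarse version of the input pitch contour. One plausible way to coarsen an f0 track is block-averaging over fixed windows of frames; this sketch is an assumption about what "coarse" could mean here, not the paper's exact conditioning scheme:

```python
def coarsen_contour(f0, factor=10):
    """Downsample a pitch contour by averaging non-overlapping
    blocks of `factor` frames; the last block may be shorter."""
    if factor < 1:
        raise ValueError("factor must be >= 1")
    return [
        sum(f0[i:i + factor]) / len(f0[i:i + factor])
        for i in range(0, len(f0), factor)
    ]
```

Block-averaging keeps the melodic outline while discarding frame-level detail, which is the property coarse conditioning relies on: the model is free to fill in ornamentation that differs from the input.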
|