Note: the full dataset viewer is not available; only a preview of the rows is shown. Parquet conversion fails with a DatasetGenerationCastError while the JSON builder processes zip://data/ontology.json: the ontology file introduces 3 new columns ({'state', 'dialogue_acts', 'intents'}) and lacks the 5 columns present in the dialogue files ({'original_id', 'dataset', 'turns', 'dialogue_id', 'data_split'}). All data files in a configuration must share the same columns, so the files must either be edited to match or separated into different configurations (see https://hf.co/docs/hub/datasets-manual-configuration#multiple-configurations).
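Per the linked documentation, a mismatch like this can be resolved with a `configs` block in the card's YAML front matter that puts the ontology in its own configuration. A hypothetical sketch (config names and file patterns are illustrative, not the repository's actual layout):

```yaml
configs:
  - config_name: dialogues
    data_files: "data/dialogues_*.json"
  - config_name: ontology
    data_files: "data/ontology.json"
```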

Dataset Preview (truncated)

Each row has six columns: domains (sequence of string), dataset (string), original_id (string), dialogue_id (string), turns (list), and data_split (string). The preview shows the first training dialogues of the flights domain (tm2-train-0 through tm2-train-14), for example:

domains: [ "flights" ]
dataset: "tm2"
original_id: "dlg-00100680-00e0-40fe-8321-6d81b21bfc4f"
dialogue_id: "tm2-train-0"
data_split: "train"
turns: [ { "dialogue_acts": { "binary": [], "categorical": [], "non-categorical": [ { "domain": "flights", "end": 36, "intent": "inform", "slot": "type", "start": 26, "value": "round trip" }, ... ] }, ... } ]

System turns carry "state": null, while user turns carry an accumulated "state" per domain.

Dataset Card for Taskmaster-2

To use this dataset, you need to install the ConvLab-3 platform first. Then you can load the dataset via:

from convlab.util import load_dataset, load_ontology, load_database

dataset = load_dataset('tm2')
ontology = load_ontology('tm2')
database = load_database('tm2')

For more usage, please refer to the ConvLab-3 documentation.
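Each dialogue returned by load_dataset follows the schema shown in the preview: non-categorical dialogue acts carry character-level span annotations into the utterance. A minimal sketch of reading them back out (the turn below is a hand-built example mirroring the preview format, not an actual row):

```python
# A hand-built turn in the tm2 format: each non-categorical dialogue act
# records the character span (start/end) of its value in the utterance.
turn = {
    "speaker": "user",
    "utt_idx": 1,
    "utterance": "I'd like to book a round trip flight.",
    "dialogue_acts": {
        "binary": [],
        "categorical": [],
        "non-categorical": [
            {"domain": "flights", "intent": "inform", "slot": "type",
             "start": 19, "end": 29, "value": "round trip"},
        ],
    },
}

# Recover each annotated value from its span and check it matches.
for act in turn["dialogue_acts"]["non-categorical"]:
    span = turn["utterance"][act["start"]:act["end"]]
    assert span == act["value"], (span, act["value"])
    print(f"{act['intent']}({act['domain']}.{act['slot']}) = {span!r}")
    # inform(flights.type) = 'round trip'
```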

Dataset Summary

The Taskmaster-2 dataset consists of 17,289 dialogs across seven domains. Unlike Taskmaster-1, which includes both written "self-dialogs" and spoken two-person dialogs, Taskmaster-2 consists entirely of spoken two-person dialogs. In addition, while Taskmaster-1 is almost exclusively task-based, Taskmaster-2 contains a good number of search- and recommendation-oriented dialogs, as seen for example in the restaurants, flights, hotels, and movies verticals. The music browsing and sports conversations are almost exclusively search- and recommendation-based. All dialogs in this release were created using a Wizard of Oz (WOz) methodology, in which crowdsourced workers played the role of the 'user' and trained call center operators played the role of the 'assistant'. Users were thus led to believe they were interacting with an automated system that "spoke" via text-to-speech (TTS), even though it was in fact a human behind the scenes. As a result, users could express themselves however they chose in the context of an automated interface.

  • How to get the transformed data from the original data:
    • Download master.zip.
    • Run python preprocess.py in the current directory.
  • Main changes in the transformation:
    • Remove dialogs that are empty or contain only one speaker.
    • Split each domain's dialogs into train/validation/test randomly (8:1:1).
    • Merge consecutive turns by the same speaker (ignoring repeated turns).
    • Annotate dialogue acts according to the original segment annotations, adding an intent annotation (always inform). A dialogue act is typed as non-categorical if the slot is not in anno2slot in preprocess.py; otherwise it is typed as binary (with an empty value). When multiple spans overlap, only the shortest one is kept, since we found this simple strategy reduces annotation noise.
    • Add domain, intent, and slot descriptions.
    • Build the state by accumulating non-categorical dialogue acts in the order they appear.
    • Keep only the first annotation, since each conversation was annotated by two workers.
  • Annotations:
    • dialogue acts, state.
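The state-accumulation step above can be sketched as follows (a simplified illustration under the stated assumptions, not the actual preprocess.py code; the slot names are examples):

```python
def accumulate_state(turns):
    """Build per-turn states by accumulating non-categorical dialogue
    act values in the order they appear; later values for the same
    slot overwrite earlier ones."""
    state = {}
    for turn in turns:
        if turn["speaker"] != "user":
            continue  # in tm2, system turns carry state: null
        for act in turn["dialogue_acts"]["non-categorical"]:
            state.setdefault(act["domain"], {})[act["slot"]] = act["value"]
        # Snapshot the accumulated state onto the user turn.
        turn["state"] = {d: dict(slots) for d, slots in state.items()}
    return turns

turns = [
    {"speaker": "user", "dialogue_acts": {"non-categorical": [
        {"domain": "flights", "slot": "origin", "value": "Houston"}]}},
    {"speaker": "user", "dialogue_acts": {"non-categorical": [
        {"domain": "flights", "slot": "type", "value": "round trip"}]}},
]
accumulate_state(turns)
print(turns[-1]["state"])
# {'flights': {'origin': 'Houston', 'type': 'round trip'}}
```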

Supported Tasks and Leaderboards

NLU, DST, Policy, NLG

Languages

English

Data Splits

| split      | dialogues | utterances | avg_utt | avg_tokens | avg_domains | cat slot match (state) | cat slot match (goal) | cat slot match (dialogue act) | non-cat slot span (dialogue act) |
|------------|-----------|------------|---------|------------|-------------|------------------------|-----------------------|-------------------------------|----------------------------------|
| train      | 13838     | 234321     | 16.93   | 9.1        | 1           | -                      | -                     | -                             | 100                              |
| validation | 1731      | 29349      | 16.95   | 9.15       | 1           | -                      | -                     | -                             | 100                              |
| test       | 1734      | 29447      | 16.98   | 9.07       | 1           | -                      | -                     | -                             | 100                              |
| all        | 17303     | 293117     | 16.94   | 9.1        | 1           | -                      | -                     | -                             | 100                              |

7 domains: ['flights', 'food-ordering', 'hotels', 'movies', 'music', 'restaurant-search', 'sports']

  • cat slot match: the percentage of categorical slot values that appear among the ontology's possible values.
  • non-cat slot span: the percentage of non-categorical slot values that have span annotations.
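The non-cat slot span metric can be computed as sketched below (an illustrative implementation, not ConvLab's evaluation code):

```python
def noncat_span_percentage(dialogues):
    """Percentage of non-categorical dialogue act values that carry a
    character-level span annotation (both start and end present)."""
    total = with_span = 0
    for dial in dialogues:
        for turn in dial["turns"]:
            for act in turn["dialogue_acts"]["non-categorical"]:
                total += 1
                if act.get("start") is not None and act.get("end") is not None:
                    with_span += 1
    return 100.0 * with_span / total if total else 0.0

# Tiny hand-built example: both acts have spans, so the result is 100.0.
dialogues = [{"turns": [{"dialogue_acts": {"non-categorical": [
    {"slot": "origin", "value": "Houston", "start": 46, "end": 53},
    {"slot": "type", "value": "round trip", "start": 26, "end": 36},
]}}]}]
print(noncat_span_percentage(dialogues))  # 100.0
```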

Citation

@inproceedings{byrne-etal-2019-taskmaster,
  title = {Taskmaster-1: Toward a Realistic and Diverse Dialog Dataset},
  author = {Bill Byrne and Karthik Krishnamoorthi and Chinnadhurai Sankar and Arvind Neelakantan and Daniel Duckworth and Semih Yavuz and Ben Goodrich and Amit Dubey and Kyu-Young Kim and Andy Cedilnik},
  booktitle = {Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP)},
  address = {Hong Kong},
  year = {2019}
}

Licensing Information

CC BY 4.0
