| id_paragraph (string, 20-26 chars) | parag_1 (string, 101-3.02k chars) | parag_2 (string, 173-2.77k chars) | annot_1 (dict) | annot_2 (dict) | id_source (string, 8-11 chars) | id_target (string, 8-11 chars) | index_paragraph (int64, 0-26) | list_sentences_1 (list, 1-36 items) | list_sentences_2 (list, 1-36 items) |
|---|---|---|---|---|---|---|---|---|---|
I-N2JgVIgy.BNpWofyXgi.00
|
In this subsection, we conduct ablation study on the superiority of thelow-dimensional contrastive embedding ( i.e. , our method) over the traditional contrastive embedding ( i.e. , thebaseline method). We use the STL-10and CIFAR-10 datasets to train the baseline SimCLR [7] and two implementations of CLLR, i.e. , the ℓ 2 , 1 -normbased regularization and nuclear-normbased regularization. We train all models with 100 and 400 epochs with the same batch size and learning rate, respectively, and we record the test accuracy of all methods by finetuning a linear softmax . The baseline method learns contrastive embeddings in the high-dimensionalspace (dimension = 2048 , 3072 , and 4096 ) and the simply fixed low-dimensional space (dimension =256 and 512 ). We also include the baseline results that do not use the ℓ 2 , 1 -norm and nuclear normconstraints ( i.e. , α = 0 ). Our method learns embeddings in low-dimensional space, where we use theregularizer to maintain the corresponding non-zero columns in the projection matrix L .
|
In this subsection, we conduct ablation study on the superiority of thelow-dimensional contrastive embedding ( i.e. , our method) over the traditional contrastive embedding ( i.e. , thebaseline method). We use the STL-10and CIFAR-10 datasets to train the baseline SimCLR [7] and two implementations of CLLR, i.e. , the (cid:96) 2 , 1 -norm basedregularization and nuclear-norm basedregularization. We train all models with 100 and 400 epochs with the same batch size and learning rate, respectively, and we record the test accuracy of all methods by fine-tuning a linear softmax . The baseline method learns contrastive embeddings in the high-dimensional space (where a commonsetting is 2048 -dimension) and the simply fixed low-dimensional space ( 256 -dimension and 512dimension). Our method learns embeddings in low-dimensional space, where we use the regularizerto maintain the corresponding non-zero columns in the projection matrix L .
|
{
"annotation": [
"Content_deletion",
"Rewriting_light"
],
"instruction": "Remove details about the baseline results and improve the readability.",
"annotator": "annotator_02"
}
|
{
"annotation": [
"Content_deletion",
"Rewriting_light"
],
"instruction": "Remove sentences that are unnecessary here. Simplify this text a bit.",
"annotator": "annotator_07"
}
|
I-N2JgVIgy
|
BNpWofyXgi
| 0
|
[
{
"text": "In this subsection, we conduct ablation study on the superiority of thelow-dimensional contrastive embedding ( i.e. , our method) over the traditional contrastive embedding ( i.e. , thebaseline method)."
},
{
"text": "We use the STL-10and CIFAR-10 datasets to train the baseline SimCLR [7] and two implementations of CLLR, i.e. , the ℓ 2 , 1 -normbased regularization and nuclear-normbased regularization."
},
{
"text": "We train all models with 100 and 400 epochs with the same batch size and learning rate, respectively, and we record the test accuracy of all methods by finetuning a linear softmax ."
},
{
"text": "The baseline method learns contrastive embeddings in the high-dimensionalspace (dimension = 2048 , 3072 , and 4096 ) and the simply fixed low-dimensional space (dimension =256 and 512 )."
},
{
"text": "We also include the baseline results that do not use the ℓ 2 , 1 -norm and nuclear normconstraints ( i.e. , α = 0 )."
},
{
"text": "Our method learns embeddings in low-dimensional space, where we use theregularizer to maintain the corresponding non-zero columns in the projection matrix L ."
}
] |
[
{
"text": "In this subsection, we conduct ablation study on the superiority of thelow-dimensional contrastive embedding ( i.e. , our method) over the traditional contrastive embedding ( i.e. , thebaseline method)."
},
{
"text": "We use the STL-10and CIFAR-10 datasets to train the baseline SimCLR [7] and two implementations of CLLR, i.e. , the (cid:96) 2 , 1 -norm basedregularization and nuclear-norm basedregularization."
},
{
"text": "We train all models with 100 and 400 epochs with the same batch size and learning rate, respectively, and we record the test accuracy of all methods by fine-tuning a linear softmax ."
},
{
"text": "The baseline method learns contrastive embeddings in the high-dimensional space (where a commonsetting is 2048 -dimension) and the simply fixed low-dimensional space ( 256 -dimension and 512dimension)."
},
{
"text": ""
},
{
"text": "Our method learns embeddings in low-dimensional space, where we use the regularizerto maintain the corresponding non-zero columns in the projection matrix L ."
}
] |
MXi6uEx-hp.rdZfFcGyf9.18
|
We compare the version of AGILE where the GAT only receives action features as input and no state. Thus, the decision-choice is still aware of other actions, but the learned relations are fixed, not dependent on the state. Figure 7 shows a drop in performance for Grid World and CREATE, where the relevant action relations change based on the state. However, there is no drop in RecSim because CPR task only requires knowing the most common category, which is independent of user state.
|
We evaluate a version of AGILE where the GAT only receives action representations as input and no state. Thus, the action relations are inferred independently of the state. Figure 7 shows a drop in performance for Grid World and CREATE, where the relevant action relations change based on the state. However, this effect is less apparent on RecSim because CPR requires only knowing the most common category, independent of user state.
|
{
"annotation": [
"Rewriting_medium"
],
"instruction": "Rewrite the second sentence of the paragraph and improve the English in the remainder",
"annotator": "annotator_10"
}
|
{
"annotation": [
"Concision",
"Rewriting_medium"
],
"instruction": "Make this paragraph more concise, keeping the main points of each sentence.",
"annotator": "annotator_03"
}
|
MXi6uEx-hp
|
rdZfFcGyf9
| 18
|
[
{
"text": "We compare the version of AGILE where the GAT only receives action features as input and no state."
},
{
"text": "Thus, the decision-choice is still aware of other actions, but the learned relations are fixed, not dependent on the state."
},
{
"text": "Figure 7 shows a drop in performance for Grid World and CREATE, where the relevant action relations change based on the state."
},
{
"text": "However, there is no drop in RecSim because CPR task only requires knowing the most common category, which is independent of user state."
}
] |
[
{
"text": "We evaluate a version of AGILE where the GAT only receives action representations as input and no state."
},
{
"text": "Thus, the action relations are inferred independently of the state."
},
{
"text": "Figure 7 shows a drop in performance for Grid World and CREATE, where the relevant action relations change based on the state."
},
{
"text": "However, this effect is less apparent on RecSim because CPR requires only knowing the most common category, independent of user state."
}
] |
wSf7BpyxTb.ZCPjX5OcL.02
|
Therefore, to optimize the performance of SREDA further, we tune q, m from a grid search over { 10 , 100 , 200 } . For methods without variance reduction, i.e., for SAPD, SMDA and PASGDA, wetune the batch size from { 10 , 100 , 200 } as well. For SAPD and SAPD-VR, we tune the momentum θ from { 0 . 8 , 0 . 85 , 0 . 9 } and let the inner iteration numbers N = ln(265)ln( 1 θ ) according to eq.
|
Therefore, to optimize the performance of SREDA further, we tune q, m from a grid search over { 10 , 100 , 200 } . For methods without variance reduction, i.e., for SAPD+ , SMDA and PASGDA , we also use mini-batch to estimate the gradients and tune the batch size from { 10 , 100 , 200 } as well. For SAPD+ and SAPD+VR , we tune the momentum θ from { 0 . 8 , 0 . 85 , 0 . 9 } and the inner iteration number from N = { 10 , 50 , 100 } .
|
{
"annotation": [
"Development"
],
"instruction": "",
"annotator": "annotator_07"
}
| null |
wSf7BpyxTb
|
ZCPjX5OcL
| 2
|
[
{
"text": "Therefore, to optimize the performance of SREDA further, we tune q, m from a grid search over { 10 , 100 , 200 } ."
},
{
"text": "For methods without variance reduction, i.e., for SAPD, SMDA and PASGDA, wetune the batch size from { 10 , 100 , 200 } as well."
},
{
"text": "For SAPD and SAPD-VR, we tune the momentum θ from { 0 . 8 , 0 . 85 , 0 . 9 } and let the inner iteration numbers N = ln(265)ln( 1 θ ) according to eq."
}
] |
[
{
"text": "Therefore, to optimize the performance of SREDA further, we tune q, m from a grid search over { 10 , 100 , 200 } ."
},
{
"text": "For methods without variance reduction, i.e., for SAPD+ , SMDA and PASGDA , we also use mini-batch to estimate the gradients and tune the batch size from { 10 , 100 , 200 } as well."
},
{
"text": "For SAPD+ and SAPD+VR , we tune the momentum θ from { 0 . 8 , 0 . 85 , 0 . 9 } and the inner iteration number from N = { 10 , 50 , 100 } ."
}
] |
F3z0hchpGy.xeuzrNJiNW.01
|
Vectors and scalars are not the only kind of geometric features that can be inputs and outputs of a GEM-CNN layer. In general, the coefficients of a geometric feature of C dimensions changes by a linear transformation ρ ( − g ) ∈ R C × C if the gauge is rotated by angle g . The map ρ : [0 , 2 π ) → R C × C is called the type of the geometric quantity and is formally known as a group representation of the planar rotation group SO(2) . From the theory of group representations, we know that any feature type can be composed from “irreducible representations” (irreps). For SO(2) , these are the one dimensional invariant scalar representation ρ 0 and for all n ∈ N > 0 , a two dimensional representation ρ n ,
|
Vectors and scalars are not the only type of geometric features that can be inputs and outputs of a GEM-CNN layer. In general, the coefficients of a geometric feature of C dimensions changes by an invertible linear transformation ρ ( − g ) ∈ R C × C if the gauge is rotated by angle g . The map ρ : [0 , 2 π ) → R C × C is called the type of the geometric quantity and is formally known as a group representation of the planar rotation group SO(2) . Group representations have the property that ρ ( g + h ) = ρ ( g ) ρ ( h ) (they are group homomorphisms), which implies in particular that ρ (0) = and ρ ( − g ) = ρ ( g ) − 1 . For more background on group representation theory, we refer the reader to (Serre, 1977) and, specifically in the context of equivariant deep learning, to (Lang & Weiler, 2020). From the theory of group representations, we know that any feature type can be composed from “irreducible representations” (irreps). For SO(2) , these are the one dimensional invariant scalar representation ρ 0 and for all n ∈ N > 0 , a two dimensional representation ρ n ,
|
{
"annotation": [
"Content_addition"
],
"instruction": "",
"annotator": "annotator_07"
}
| null |
F3z0hchpGy
|
xeuzrNJiNW
| 1
|
[
{
"text": "Vectors and scalars are not the only kind of geometric features that can be inputs and outputs of a GEM-CNN layer."
},
{
"text": "In general, the coefficients of a geometric feature of C dimensions changes by a linear transformation ρ ( − g ) ∈ R C × C if the gauge is rotated by angle g ."
},
{
"text": "The map ρ : [0 , 2 π ) → R C × C is called the type of the geometric quantity and is formally known as a group representation of the planar rotation group SO(2) . "
},
{
"text": ""
},
{
"text": ""
},
{
"text": " From the theory of group representations, we know that any feature type can be composed from “irreducible representations” (irreps)."
},
{
"text": "For SO(2) , these are the one dimensional invariant scalar representation ρ 0 and for all n ∈ N > 0 , a two dimensional representation ρ n ,"
}
] |
[
{
"text": "Vectors and scalars are not the only type of geometric features that can be inputs and outputs of a GEM-CNN layer."
},
{
"text": "In general, the coefficients of a geometric feature of C dimensions changes by an invertible linear transformation ρ ( − g ) ∈ R C × C if the gauge is rotated by angle g ."
},
{
"text": "The map ρ : [0 , 2 π ) → R C × C is called the type of the geometric quantity and is formally known as a group representation of the planar rotation group SO(2) . Group representations have the property that ρ ( g + h ) = ρ ( g ) ρ ( h ) (they are group homomorphisms), which implies in particular that ρ (0)"
},
{
"text": "="
},
{
"text": "and ρ ( − g )"
},
{
"text": "= ρ ( g ) − 1 . For more background on group representation theory, we refer the reader to (Serre, 1977) and, specifically in the context of equivariant deep learning, to (Lang & Weiler, 2020). From the theory of group representations, we know that any feature type can be composed from “irreducible representations” (irreps)."
},
{
"text": "For SO(2) , these are the one dimensional invariant scalar representation ρ 0 and for all n ∈ N > 0 , a two dimensional representation ρ n ,"
}
] |
rSY5h1VyMd.2RqWouzq_W.00
|
BERT in natural settings. They can provide the appropriate prompts and tasks to answer questions about linguistic mechanisms underlying predictive responses. This paper adopted psycholinguistic datasets to probe language models’ commonsense reasoning. Findings suggest that DistillBERT had some understanding of the (implied) intent that’s shared among most people. Such intent is implicitly reflected in the usage of conversational implicatures and presuppositions. Whether or not fine-tuning improved its performance to human-level depends on the type of commonsense reasoning.
|
BERT in natural settings. They can provide the appropriate prompts and tasks to answer questions about linguistic mechanisms underlying predictive responses. This paper adopted psycholinguistic datasets to probe language models’ commonsense reasoning. Findings suggest that GPT-3’s performance was mostly at chance in the psycholinguistic tasks. We also showed that DistillBERT had some understanding of the (implied) intent that’s shared among most people. Such intent is implicitly reflected in the usage of conversational implicatures and presuppositions. Whether or not fine-tuning improved its performance to human-level depends on the type of commonsense reasoning.
|
{
"annotation": [
"Content_addition"
],
"instruction": "",
"annotator": "annotator_07"
}
| null |
rSY5h1VyMd
|
2RqWouzq_W
| 0
|
[
{
"text": "BERT in natural settings."
},
{
"text": "They can provide the appropriate prompts and tasks to answer questions about linguistic mechanisms underlying predictive responses."
},
{
"text": "This paper adopted psycholinguistic datasets to probe language models’ commonsense reasoning."
},
{
"text": ""
},
{
"text": "Findings suggest that DistillBERT had some understanding of the (implied) intent that’s shared among most people."
},
{
"text": "Such intent is implicitly reflected in the usage of conversational implicatures and presuppositions."
},
{
"text": "Whether or not fine-tuning improved its performance to human-level depends on the type of commonsense reasoning."
}
] |
[
{
"text": "BERT in natural settings."
},
{
"text": "They can provide the appropriate prompts and tasks to answer questions about linguistic mechanisms underlying predictive responses."
},
{
"text": "This paper adopted psycholinguistic datasets to probe language models’ commonsense reasoning."
},
{
"text": "Findings suggest that GPT-3’s performance was mostly at chance in the psycholinguistic tasks."
},
{
"text": "We also showed that DistillBERT had some understanding of the (implied) intent that’s shared among most people."
},
{
"text": "Such intent is implicitly reflected in the usage of conversational implicatures and presuppositions."
},
{
"text": "Whether or not fine-tuning improved its performance to human-level depends on the type of commonsense reasoning."
}
] |
hcpw8dPgCX.1UO51C7swt.00
|
Hofbauer & Weibull (1996) showed that in a class of learning dynamics which includes replicator dynamics — the continuous-time variant of FTRL, all iteratively strictly dominated actions vanish over time, while Mertikopoulos & Moustakas (2010) proved similar results for stochastic replicator dynamics; however, neither work provides finite-time guarantees. Cohen et al. (2017) proved that Hedge eliminates dominated actions in finite time, but did not extend their results to the more challenging case of iteratively dominated actions.
|
For this equivalence to hold, we need to allow dominance by mixed strategies, and correlated beliefs when there are more than two players. These conditions are met in the setting of this work. strictly dominated actions vanish over time, while Mertikopoulos & Moustakas (2010) proved similar results for stochastic replicator dynamics; however, neither work provides finite-time guarantees. Cohen et al. (2017) proved that Hedge eliminates dominated actions in finite time, but did not extend their results to the more challenging case of iteratively dominated actions.
|
{
"annotation": [
"Unusable"
],
"instruction": "",
"annotator": "annotator_07"
}
| null |
hcpw8dPgCX
|
1UO51C7swt
| 0
|
[
{
"text": "Hofbauer & Weibull (1996) showed that in a class of learning dynamics which includes replicator dynamics — the continuous-time variant of FTRL, all iteratively strictly dominated actions vanish over time, while Mertikopoulos & Moustakas (2010) proved similar results for stochastic replicator dynamics; however, neither work provides finite-time guarantees."
},
{
"text": "Cohen et al."
},
{
"text": "(2017) proved that Hedge eliminates dominated actions in finite time, but did not extend their results to the more challenging case of iteratively dominated actions."
}
] |
[
{
"text": "For this equivalence to hold, we need to allow dominance by mixed strategies, and correlated beliefs when there are more than two players. These conditions are met in the setting of this work. strictly dominated actions vanish over time, while Mertikopoulos & Moustakas (2010) proved similar results for stochastic replicator dynamics; however, neither work provides finite-time guarantees."
},
{
"text": "Cohen et al."
},
{
"text": "(2017) proved that Hedge eliminates dominated actions in finite time, but did not extend their results to the more challenging case of iteratively dominated actions."
}
] |
nCTSF9BQJ.DGhBYSP_sR.19
|
Results According to Table 1, our RDE-Network outperforms all the baselines. Notably,RDENetwork improves per-structure correlations by a large margin, which implies that it is significantly more reliable for practical applications. The advantage of RDE-Network over MIF-Network shows that representations obtained by fitting rotamer densities are more effective than those from masked inverse folding because protein binding is driven by atomic interactions which RDE captures well by modeling the conformation of sidechain atoms.
|
Results According to Table 1, our RDE-Network outperforms all the baselines. Notably, it demonstrates a significant improvement in per-structure correlations, indicating its greater reliability for practical applications. The superior performance of RDE-Network over MIF-Network suggests that representations derived from fitting rotamer densities are more effective than those from masked inverse folding, as RDE captures atomic interactions well by modeling the conformation of sidechain atoms.
|
{
"annotation": [
"Rewriting_medium"
],
"instruction": "Paraphrase this paragraph using formal language",
"annotator": "annotator_01"
}
|
{
"annotation": [
"Rewriting_medium"
],
"instruction": "Revise this paragraph in a more academic style.",
"annotator": "annotator_07"
}
|
nCTSF9BQJ
|
DGhBYSP_sR
| 19
|
[
{
"text": "The datasets for training the rotamer density estimator are derived from PDB-REDO (Joosten et al., 2014), a database containing refined X-ray structures in PDB."
},
{
"text": "Structures with resolution worse than 3.5 ˚A are removed."
},
{
"text": "All the protein chains are clustered by 50% sequence identity. This process leads to 38,413 chain clusters, and they are randomly divided into the training set, the validation set, and the test set by 95%/0.5%/4.5%."
},
{
"text": "At training time, the data loader randomly selects a cluster and then randomly chooses a chain from the cluster."
},
{
"text": "During training, the structure is cropped into a patch containing 128 amino acids. The patch is constructed by first choosing a seed amino acid, and then finding its 127 nearest neighbors according to C-beta distances."
},
{
"text": "To emulate mutations, the rotamers of 10% of amino acids in the patch are masked. Noise is added to the rotamers of amino acids whose C-beta distance to the closest masked amino acid is less than 8 ˚A."
}
] |
[
{
"text": "The dataset for training the rotamer density estimator is derived from PDB-REDO (Joosten et al., 2014), which is a database containing refined X-ray structures in PDB."
},
{
"text": ""
},
{
"text": "The protein chains are clustered based on 50% sequence identity, leading to 38,413 chain clusters, which are randomly divided into the training, validation, and test sets by 95%/0.5%/4.5% respectively."
},
{
"text": "During training, the data loader randomly selects a cluster and then randomly chooses a chain from the cluster to ensure balanced sampling."
},
{
"text": "We crop structures into patches containing 128 residues by first choosing a seed residue, and then selecting its 127 nearest neighbors based on C-beta distances."
},
{
"text": "To simulate mutations, we masked the rotamers of 10% of residues in the patch, and we added noise to the rotamers of residues whose C-beta distance to the closest masked residue was less than 8 ˚A."
}
] |
7_CwM-IzWd.zcm6f5HDI.06
|
In this section, we introduce the greedy learner hypothesis to explain challenges observed in training multi-modal DNNs. Before describing our hypothesis, we start by discussing some assumptions on the multi-modal data and preliminary observations made in the literature on multi-modal learning.
|
In this section, we introduce the greedy learner hypothesis to explain challenges observed in training multi-modal DNNs. Before describing our hypothesis, we discuss some assumptions on the multi-modal data and preliminary observations made in the multi-modal learning literature.
|
{
"annotation": [
"Rewriting_light"
],
"instruction": "Make expression concise.",
"annotator": "annotator_08"
}
|
{
"annotation": [
"Rewriting_light"
],
"instruction": "Improve the English of this paragraph.",
"annotator": "annotator_02"
}
|
7_CwM-IzWd
|
zcm6f5HDI
| 6
|
[
{
"text": "In this section, we introduce the greedy learner hypothesis to explain challenges observed in training multi-modal DNNs."
},
{
"text": "Before describing our hypothesis, we start by discussing some assumptions on the multi-modal data and preliminary observations made in the literature on multi-modal learning."
}
] |
[
{
"text": "In this section, we introduce the greedy learner hypothesis to explain challenges observed in training multi-modal DNNs."
},
{
"text": "Before describing our hypothesis, we discuss some assumptions on the multi-modal data and preliminary observations made in the multi-modal learning literature."
}
] |
oH-CV7Qprn.l7-CEr3ki.00
|
Compared with ESTR in [17], (cid:15) -FALB in [15] and LowESTR in [ ], our algorithms are designedfor nonlinear reward framework. Compared with LowGLOC in [26], our algorithms achieve a betterregret bound, can work with varying action sets and are computationally feasible. For G-ESTT,we extend the GLM-UCB algorithms [11] via a regularization technique along with some noveltechniques. Our proposed G-ESTS is simple and could be easily implemented based on anystate-of-the-art generalized linear bandit algorithms. In particular, when we combine G-ESTS withsome efficient algorithms (e.g. SGD-TS [9]), the total time complexity after a warm-up stage scalesas O ( Tr ( d 1 + d 2 )) . We verify that G-ESTT and G-ESTS are the first two algorithms to attain the˜ O (( d 1 + d 2 ) r √ T ) optimal regret bound of low-rank matrix bandit problems up to logarithmic terms.
|
Compared with ESTR in [17], (cid:15) -FALB in [15] and LowESTR in [26], our algorithms are proposed forthe nonlinear reward framework with arbitrary action matrices. Compared with LowGLOC in [26], our algorithms not only achieve a better regret bound in theory, but also are computationally feasiblein practice. For G-ESTT, we extend the GLM-UCB algorithms [11] via a novel regularizationtechnique. Our proposed G-ESTS is simple and could be easily implemented based on anystate-of-the-art generalized linear bandit algorithms. In particular, when we combine G-ESTS withsome efficient algorithms (e.g. SGD-TS [9]), the total time complexity after a warm-up stage scalesas O ( Tr ( d 1 + d 2 )) . We verify that G-ESTT and G-ESTS are the first two algorithms to attain the˜ O (( d 1 + d 2 ) r √ T ) optimal regret bound of low-rank matrix bandit problems up to logarithmic terms.
|
{
"annotation": [
"Rewriting_light",
"Concision"
],
"instruction": "Improve the English of this paragraph",
"annotator": "annotator_10"
}
|
{
"annotation": [
"Rewriting_medium"
],
"instruction": "Rewrite the first part of the paragraph to make it more convincing.",
"annotator": "annotator_07"
}
|
oH-CV7Qprn
|
l7-CEr3ki
| 0
|
[
{
"text": "Compared with ESTR in [17], (cid:15) -FALB in [15] and LowESTR in [ ], our algorithms are designedfor nonlinear reward framework."
},
{
"text": "Compared with LowGLOC in [26], our algorithms achieve a betterregret bound, can work with varying action sets and are computationally feasible."
},
{
"text": "For G-ESTT,we extend the GLM-UCB algorithms [11] via a regularization technique along with some noveltechniques."
},
{
"text": "Our proposed G-ESTS is simple and could be easily implemented based on anystate-of-the-art generalized linear bandit algorithms."
},
{
"text": "In particular, when we combine G-ESTS withsome efficient algorithms (e.g. SGD-TS [9]), the total time complexity after a warm-up stage scalesas O"
},
{
"text": "( Tr ( d 1 + d 2 )) ."
},
{
"text": "We verify that G-ESTT and G-ESTS are the first two algorithms to attain the˜ O"
},
{
"text": "(( d 1 + d 2 ) r √ T ) optimal regret bound of low-rank matrix bandit problems up to logarithmic terms."
}
] |
[
{
"text": "Compared with ESTR in [17], (cid:15) -FALB in [15] and LowESTR in [26], our algorithms are proposed forthe nonlinear reward framework with arbitrary action matrices."
},
{
"text": "Compared with LowGLOC in [26], our algorithms not only achieve a better regret bound in theory, but also are computationally feasiblein practice."
},
{
"text": "For G-ESTT, we extend the GLM-UCB algorithms [11] via a novel regularizationtechnique."
},
{
"text": "Our proposed G-ESTS is simple and could be easily implemented based on anystate-of-the-art generalized linear bandit algorithms."
},
{
"text": "In particular, when we combine G-ESTS withsome efficient algorithms (e.g. SGD-TS [9]), the total time complexity after a warm-up stage scalesas O"
},
{
"text": "( Tr ( d 1 + d 2 )) ."
},
{
"text": "We verify that G-ESTT and G-ESTS are the first two algorithms to attain the˜ O"
},
{
"text": "(( d 1 + d 2 ) r √ T ) optimal regret bound of low-rank matrix bandit problems up to logarithmic terms."
}
] |
u9NaukzyJ-.hh0KECXQLv.12
|
A and Design B , and 7.0 seconds with Design C . To complete this task with Design A , participants had tovertically scroll to the end of the day to see all entries. Three participants (P3, P5, and P7) complained about this. For example, P3 said “I find it’s a lot of scrolling down. It would be helpful if there was a way to condense it or to make it possible to see the entire calendar available in terms of morning, afternoon, and evening.” , and P8 said “The time frames are a bit big. So it makes like I said, it really makes it scroll off that you can’t see it all in one consolidated view” . With Design B , participants were expected to use the daily medica- tion summaries provided at the top. Three participants (P1, P6, and P9) found the daily summaries helpful in performing this task. For example, P9 said “Yeah, I like the idea of having the first row on the calendar dedicated only for the medications that needs to be taken. I think it brings an overall idea [of] what should be taken during that day.” . Three participants (P3, P4, and P5) also complained about the lines demarcating days not being clear. For example, P4 said “I have a harder time differentiating the calendar component the days, because there’s not a strong border between the days of the week.” .
|
Three participants (P3, P5, and P7) complained about the need to scroll to the end of the day with Design A . For example, P3 said “I find it’s a lot of scrolling down. It would be helpful if there was a way to condense it or to make it possible to see the entire calendar available in terms of morning, afternoon, and evening.” , and P8 said “The time frames are a bit big. So it makes like I said, it really makes it scroll off that you can’t see it all in one consolidated view” . With Design B , participants were expected to use the daily medication summaries provided at the top. Three participants (P1, P6, and P9) found the daily summaries helpful in performing this task. For example, P9 said “Yeah, I like the idea of having the first row on the calendar dedicated only for the medica- tions that needs to be taken. I think it brings an overall idea [of] what should be taken during that day.” . Three participants (P3, P4, and P5) also complained about the lines demarcating days not being clear. For example, P4 said “I have a harder time differentiating the calendar component the days, because there’s not a strong border between the days of the week.” .
|
{
"annotation": [
"Concision",
"Content_deletion"
],
"instruction": "Remove unnecessary details for the paragraph.",
"annotator": "annotator_03"
}
|
{
"annotation": [
"Concision"
],
"instruction": "The first sentence is a bit unclear.",
"annotator": "annotator_09"
}
|
u9NaukzyJ-
|
hh0KECXQLv
| 12
|
[
{
"text": "A and Design B , and 7.0 seconds with Design C . To complete this task with Design A , participants had tovertically scroll to the end of the day to see all entries. Three participants (P3, P5, and P7) complained about this."
},
{
"text": "For example, P3 said “I find it’s a lot of scrolling down."
},
{
"text": "It would be helpful if there was a way to condense it or to make it possible to see the entire calendar available in terms of morning, afternoon, and evening.” , and P8 said “The time frames are a bit big."
},
{
"text": "So it makes like I said, it really makes it scroll off that you can’t see it all in one consolidated view” ."
},
{
"text": "With Design B , participants were expected to use the daily medica- tion summaries provided at the top."
},
{
"text": "Three participants (P1, P6, and P9) found the daily summaries helpful in performing this task."
},
{
"text": "For example, P9 said “Yeah, I like the idea of having the first row on the calendar dedicated only for the medications that needs to be taken."
},
{
"text": "I think it brings an overall idea [of] what should be taken during that day.” ."
},
{
"text": "Three participants (P3, P4, and P5) also complained about the lines demarcating days not being clear."
},
{
"text": "For example, P4 said “I have a harder time differentiating the calendar component the days, because there’s not a strong border between the days of the week.” ."
}
] |
[
{
"text": "Three participants (P3, P5, and P7) complained about the need to scroll to the end of the day with Design A ."
},
{
"text": "For example, P3 said “I find it’s a lot of scrolling down."
},
{
"text": "It would be helpful if there was a way to condense it or to make it possible to see the entire calendar available in terms of morning, afternoon, and evening.” , and P8 said “The time frames are a bit big."
},
{
"text": "So it makes like I said, it really makes it scroll off that you can’t see it all in one consolidated view” ."
},
{
"text": "With Design B , participants were expected to use the daily medication summaries provided at the top."
},
{
"text": "Three participants (P1, P6, and P9) found the daily summaries helpful in performing this task."
},
{
"text": "For example, P9 said “Yeah, I like the idea of having the first row on the calendar dedicated only for the medica- tions that needs to be taken."
},
{
"text": "I think it brings an overall idea [of] what should be taken during that day.” ."
},
{
"text": "Three participants (P3, P4, and P5) also complained about the lines demarcating days not being clear."
},
{
"text": "For example, P4 said “I have a harder time differentiating the calendar component the days, because there’s not a strong border between the days of the week.” ."
}
] |
hAi0PMz9T7.Ut8ESfYp1.00
|
However, these “black box" policies lack interpretability, and reliability and, moreimportantly, cannot operate under the TCP datapath’s ultra-contingent latencyand computational constraints. This paper proposes a novel two-stage solutionto achieve the best of both worlds: first to train a deep RL agent, then distill its(over-)parameterized NN policy into white-box, light-weight rules in the formof symbolic expressions that are much easier to understand and to implementin constrained environments. At the core of our proposal is a novel symbolicbranching algorithm that allows the rule to be “context-aware” of various networkconditions, eventually converting the NN policy into a symbolic tree. The distilledsymbolic rules preserve and often improve performance over state-of-the-art NNpolicies while being faster and simpler than a standard neural network. We validatethe performance of our distilled symbolic rules on both simulation and emulationnetwork systems. Our code will be released upon acceptance.
|
However, such “black-box” policies lack interpretability and reliability, and often, they need to operate outside the traditional TCP datapath due to the use of complex NNs. This paper proposes a novel two-stage solution to achieve the best of both worlds: first to train a deep RL agent, then distill its (over-)parameterized NN policy into white-box, light-weight rules in the form of symbolic expressions that are much easier to understand and to implement in constrained environments. At the core of our proposal is a novel symbolic branching algorithm that enables the rule to be aware of the context in terms of various network conditions, eventually converting the NN policy into a symbolic tree. The distilled symbolic rules preserve and often improve performance over state-of-the-art NN policies while being faster and simpler than a standard neural network. We validate the performance of our distilled symbolic rules on both simulation and emulation environments. Our code is available at https://github.com/VITA-Group/SymbolicPCC .
|
{
"annotation": [
"Rewriting_medium",
"Content_substitution"
],
"instruction": "Review the following paragraph, update if possible, delete unnecessary details.",
"annotator": "annotator_01"
}
|
{
"annotation": [
"Rewriting_light",
"Content_substitution"
],
"instruction": "",
"annotator": "annotator_07"
}
|
hAi0PMz9T7
|
Ut8ESfYp1
| 0
|
[
{
"text": "However, these “black box\" policies lack interpretability, and reliability and, moreimportantly, cannot operate under the TCP datapath’s ultra-contingent latencyand computational constraints."
},
{
"text": " This paper proposes a novel two-stage solutionto achieve the best of both worlds: first to train a deep RL agent, then distill its(over-)parameterized NN policy into white-box, light-weight rules in the formof symbolic expressions that are much easier to understand and to implementin constrained environments."
},
{
"text": "At the core of our proposal is a novel symbolicbranching algorithm that allows the rule to be “context-aware” of various networkconditions, eventually converting the NN policy into a symbolic tree."
},
{
"text": "The distilledsymbolic rules preserve and often improve performance over state-of-the-art NNpolicies while being faster and simpler than a standard neural network."
},
{
"text": "We validatethe performance of our distilled symbolic rules on both simulation and emulationnetwork systems."
},
{
"text": "Our code will be released upon acceptance."
}
] |
[
{
"text": "However, such “black-box” policies lack interpretability and reliability, and often, they need to operate outside the traditional TCP datapath due to the use of complex"
},
{
"text": "NNs. This paper proposes a novel two-stage solution to achieve the best of both worlds: first to train a deep RL agent, then distill its (over-)parameterized NN policy into white-box, light-weight rules in the form of symbolic expressions that are much easier to understand and to implement in constrained environments."
},
{
"text": "At the core of our proposal is a novel symbolic branching algorithm that enables the rule to be aware of the context in terms of various network conditions, eventually converting the NN policy into a symbolic tree."
},
{
"text": "The distilled symbolic rules preserve and often improve performance over state-of-the-art NN policies while being faster and simpler than a standard neural network."
},
{
"text": "We validate the performance of our distilled symbolic rules on both simulation and emulation environments."
},
{
"text": "Our code is available at https://github.com/VITA-Group/SymbolicPCC ."
}
] |
S1fwAltvB.BkzbibmoH.00
|
We choose f to be a simple model: a single linear layer that maps from dimensionalityto 75. We use triplet loss (Schroff et al., 2015) to move the representation of the anchor vector V A closer to the representation of the positive vector V P and farther apart from the representation of the negative vector V N . Following Hoffer & Ailon (2015), we calculate the softmax version of the triplet loss:
|
We choose f to be a simple model: a single linear layer that maps from dimensionalityto 75. The dimensional of the transformation was chosen according to development set performance. We use triplet loss (Schroff et al., 2015) to move the representation of the anchor vector V A closer to the representation of the positive vector V P and farther apart from the representation of the negative vector V N . Following Hoffer & Ailon (2015), we calculate the softmax version of the triplet loss:
|
{
"annotation": [
"Content_addition"
],
"instruction": "",
"annotator": "annotator_02"
}
|
{
"annotation": [
"Development"
],
"instruction": "",
"annotator": "annotator_07"
}
|
S1fwAltvB
|
BkzbibmoH
| 0
|
[
{
"text": "We choose f to be a simple model: a single linear layer that maps from dimensionalityto 75."
},
{
"text": " We use triplet loss (Schroff et al., 2015) to move the representation of the anchor vector V A closer to the representation of the positive vector V P and farther apart from the representation of the negative vector V N ."
},
{
"text": "Following Hoffer & Ailon (2015), we calculate the softmax version of the triplet loss:"
}
] |
[
{
"text": "We choose f to be a simple model: a single linear layer that maps from dimensionalityto 75."
},
{
"text": "The dimensional of the transformation was chosen according to development set performance. We use triplet loss (Schroff et al., 2015) to move the representation of the anchor vector V A closer to the representation of the positive vector V P and farther apart from the representation of the negative vector V N ."
},
{
"text": "Following Hoffer & Ailon (2015), we calculate the softmax version of the triplet loss:"
}
] |
yxeD_Ju-SM.p9Au1Sb-uj.00
|
KG, D RAGON consists of a cross-modal encoder (GreaseLM) that fuses the input text-KG pair bidirectionally (§2.2), and a pretraining objective that performs bidirectional self-supervision on the text-KG input (§2.3). Our pretraining objective unifies masked language modeling (MLM) and KG link prediction (LinkPred) to make text and KG mutually inform each other and learn joint reasoning over them. Finally, we describe how we finetune the pretrained D RAGON model for downstream tasks (§2.4). While the individual piece of our approach (GreaseLM, MLM, LinkPred) is not new in itself, our contribution is that we are the first to bring these pieces together, present how to unify them effectively and show that this produces a significantly performant pretrained model (§3, §4).
|
KG, D RAGON consists of a cross-modal encoder (GreaseLM) that fuses the input text-KG pair bidirectionally (§2.2), and a pretraining objective that performs bidirectional self-supervision on the text-KG input (§2.3). Our pretraining objective unifies masked language modeling (MLM) and KG link prediction (LinkPred) to make text and KG mutually inform each other and learn joint reasoning over them. Finally, we describe how we finetune the pretrained D RAGON model for downstream tasks (§2.4). While each individual piece of our approach (GreaseLM, MLM, LinkPred) is not new in itself, we are the first to bring them together effectively and demonstrate that the resulting model has strong empirical results.
|
{
"annotation": [
"Concision"
],
"instruction": "Make this paragraph more concise.",
"annotator": "annotator_03"
}
|
{
"annotation": [
"Concision"
],
"instruction": "Make the last sentence more concise.",
"annotator": "annotator_07"
}
|
yxeD_Ju-SM
|
p9Au1Sb-uj
| 0
|
[
{
"text": "KG, D RAGON consists of a cross-modal encoder (GreaseLM) that fuses the input text-KG pair bidirectionally (§2.2), and a pretraining objective that performs bidirectional self-supervision on the text-KG input (§2.3)."
},
{
"text": "Our pretraining objective unifies masked language modeling (MLM) and KG link prediction (LinkPred) to make text and KG mutually inform each other and learn joint reasoning over them."
},
{
"text": "Finally, we describe how we finetune the pretrained D RAGON model for downstream tasks (§2.4)."
},
{
"text": "While the individual piece of our approach (GreaseLM, MLM, LinkPred) is not new in itself, our contribution is that we are the first to bring these pieces together, present how to unify them effectively and show that this produces a significantly performant pretrained model (§3, §4)."
}
] |
[
{
"text": "KG, D RAGON consists of a cross-modal encoder (GreaseLM) that fuses the input text-KG pair bidirectionally (§2.2), and a pretraining objective that performs bidirectional self-supervision on the text-KG input (§2.3)."
},
{
"text": "Our pretraining objective unifies masked language modeling (MLM) and KG link prediction (LinkPred) to make text and KG mutually inform each other and learn joint reasoning over them."
},
{
"text": "Finally, we describe how we finetune the pretrained D RAGON model for downstream tasks (§2.4)."
},
{
"text": "While each individual piece of our approach (GreaseLM, MLM, LinkPred) is not new in itself, we are the first to bring them together effectively and demonstrate that the resulting model has strong empirical results."
}
] |
aomiOZE_m2.rxb2TiQ6bq.08
|
Pruned Index Constraint . Filter pruning in residual networks is well-known tricky because the add operators in residual blocks demand the pruned filter indices must be aligned. Filter pruning within a residual block is shown in Fig. 2(b). A typical residual block (e.g., in EDSR (Lim et al., 2017), RCAN (Zhang et al., 2018b)) is made up of two convolutional layers. All the convolutional layers can be categorized into two groups based on their connection relationship among one another. One group comprises the layers that can be pruned without any constraint , dubbed free Conv layers in this work; the other comprises layers in which the filters must be pruned at the same indices , dubbed constrained Conv layers . For a concrete example, in Fig. 2(b), the layer W ( i ) is a free Conv layer and layer W ( i +1) is a constrained Conv layer.
|
Pruned Index Constraint . Pruning filters in residual networks is well-known non-trivial as the Add operators in residual blocks require the pruned filter indices across different residual blocks must be aligned. A figurative illustration of filter pruning within a residual block is shown in Fig. 2(b). A typical residual block (e.g., in EDSR (Lim et al., 2017), RCAN (Zhang et al., 2018b)) consists of two convolutional layers. According to the mutual connection relationship, the convolutional layers can be categorized into two groups. One group is made up with the layers that can be pruned without any constraint , dubbed free Conv layers in this work; the other comprises Conv layers whose filters must be pruned at the same indices , dubbed constrained Conv layers . Concretely, the layer W ( i ) in Fig. 2(b) is a free Conv layer, while the layer W ( i +1) is a constrained one.
|
{
"annotation": [
"Development",
"Rewriting_medium"
],
"instruction": "",
"annotator": "annotator_09"
}
|
{
"annotation": [
"Rewriting_medium"
],
"instruction": "Improve the language to make it more fitting to the academic style.",
"annotator": "annotator_07"
}
|
aomiOZE_m2
|
rxb2TiQ6bq
| 8
|
[
{
"text": "Pruned Index Constraint ."
},
{
"text": "Filter pruning in residual networks is well-known tricky because the add operators in residual blocks demand the pruned filter indices must be aligned."
},
{
"text": "Filter pruning within a residual block is shown in Fig. 2(b)."
},
{
"text": "A typical residual block (e.g., in EDSR (Lim et al., 2017), RCAN (Zhang et al., 2018b)) is made up of two convolutional layers."
},
{
"text": "All the convolutional layers can be categorized into two groups based on their connection relationship among one another."
},
{
"text": "One group comprises the layers that can be pruned without any constraint , dubbed free Conv layers in this work; the other comprises layers in which the filters must be pruned at the same indices , dubbed constrained Conv layers ."
},
{
"text": "For a concrete example, in Fig. 2(b), the layer W ( i ) is a free Conv layer and layer W ( i +1) is a constrained Conv layer."
}
] |
[
{
"text": "Pruned Index Constraint ."
},
{
"text": "Pruning filters in residual networks is well-known non-trivial as the Add operators in residual blocks require the pruned filter indices across different residual blocks must be aligned."
},
{
"text": "A figurative illustration of filter pruning within a residual block is shown in Fig. 2(b)."
},
{
"text": "A typical residual block (e.g., in EDSR (Lim et al., 2017), RCAN (Zhang et al., 2018b)) consists of two convolutional layers."
},
{
"text": "According to the mutual connection relationship, the convolutional layers can be categorized into two groups."
},
{
"text": "One group is made up with the layers that can be pruned without any constraint , dubbed free Conv layers in this work; the other comprises Conv layers whose filters must be pruned at the same indices , dubbed constrained Conv layers ."
},
{
"text": "Concretely, the layer W ( i ) in Fig. 2(b) is a free Conv layer, while the layer W ( i +1) is a constrained one."
}
] |
vrvf56Ug_C.PgzrILJ_er.00
|
Another interesting future direction is to improve upper bounds on the pseudo-dimension by restricting heuristic functions to some classes, as mentioned in Section 2. Further study of this direction will be important, particularly when applying GBFS/A* with learned heuristics to path-finding instances with extremely many vertices. In Appendix D, we present an illustrative example where we can achieve polylog( n ) upper bounds on the pseudo-dimension by assuming that heuristic functions with much fewer tunable parameters than n can be designed in an instance-specific manner.
|
Another interesting future direction is to improve upper bounds on the pseudo-dimension by restricting heuristic functions to some classes. Appendix D will present an illustrative example where we can achieve polylog( n ) upper bounds on the pseudo-dimension by assuming that heuristic functions with much fewer tunable parameters than n can be designed in an instance-specific manner.
|
{
"annotation": [
"Concision"
],
"instruction": "Make this paragraph shorter by eliminating details about further work.",
"annotator": "annotator_02"
}
|
{
"annotation": [
"Content_deletion"
],
"instruction": "Delete the sentence about further study and the reference to section 2.",
"annotator": "annotator_07"
}
|
vrvf56Ug_C
|
PgzrILJ_er
| 0
|
[
{
"text": "Another interesting future direction is to improve upper bounds on the pseudo-dimension by restricting heuristic functions to some classes, as mentioned in Section 2."
},
{
"text": "Further study of this direction will be important, particularly when applying GBFS/A* with learned heuristics to path-finding instances with extremely many vertices."
},
{
"text": "In Appendix D, we present an illustrative example where we can achieve polylog( n ) upper bounds on the pseudo-dimension by assuming that heuristic functions with much fewer tunable parameters than n can be designed in an instance-specific manner."
}
] |
[
{
"text": "Another interesting future direction is to improve upper bounds on the pseudo-dimension by restricting heuristic functions to some classes."
},
{
"text": ""
},
{
"text": "Appendix D will present an illustrative example where we can achieve polylog( n ) upper bounds on the pseudo-dimension by assuming that heuristic functions with much fewer tunable parameters than n can be designed in an instance-specific manner."
}
] |
OV5v_wBMHk.bw4cqlpLh.14
|
Statistical estimators exhibit competitive performance on the PEHE metric. In particular, neuralnetwork estimators outperformthe linear and random forest methods because they can depict the nonlinearity in data. TARNet obtains better overall performance than other statistic estimators by absorbing the advantages (R et al., 2019) ofboth T-learner and S-learner. However, the treatment selection bias makes these estimators biased, leading to sub-optimal performance.
|
Statistical estimators exhibit competitive performance on the PEHE metric. Due to the superiority to depict non-linearity, neural estimators outperform linear and random forest methods. In particular, TARNet that absorbs the advantage of T-learner and S-learner achieves the best overall performance in statistic estimators. However, the circumvention to treatment selection bias leads to inferior performance.
|
{
"annotation": [
"Rewriting_medium"
],
"instruction": "The second sentence is too complicated. Make it more understandable. Also brush up the rest.",
"annotator": "annotator_05"
}
|
{
"annotation": [
"Rewriting_medium"
],
"instruction": "Reorganize the ideas in the sentences to improve the logical flow of the text.",
"annotator": "annotator_07"
}
|
OV5v_wBMHk
|
bw4cqlpLh
| 14
|
[
{
"text": "Evaluation protocol."
},
{
"text": "Following Liuyi et al."
},
{
"text": "(2018), PEHE in Equation 3 is primarily used as the precision metric for performance evaluation."
},
{
"text": "However, PEHE is not computable during the model selection stage as the counterfactual outcome is unavailable."
},
{
"text": "As such, we use the Area Under the Uplift Curve (AUUC)"
},
{
"text": "(Betlei et al., 2021) to guide model selection, which reports the ranking performance and can be calculated with factual outcomes."
},
{
"text": "We report the within-sampleresults on the training dataset andthe out-of-sample results on the test dataset, where the factual outcome is available in the within-sample case following Uri et al."
}
] |
[
{
"text": "Evaluation protocol."
},
{
"text": "Following Liuyi et al."
},
{
"text": "(2018), PEHE in (3) is used as the precision metric for performance evaluation."
},
{
"text": "However, it is unavailable in the model selection phase due to missing counterfactuals."
},
{
"text": "As such, we use the area under the uplift curve (AUUC)"
},
{
"text": "(Betlei et al., 2021) to guide model selection, which measures the counterfactual ranking performance of the model and can be computed without counterfactual outcomes."
},
{
"text": "The within-sample and out-of-sample results are reported on the training and test data, respectively, following Shalit et al."
}
] |
c-9Hob6rd2.H4aN8Z9LDS.00
|
We then propose to divide states into several groups and incorporate them with an attention mechanism to select an appropriate state as a goal to encourage further exploration. In order tohelp RL agents learn efficiently, we update goal generation hindsight and value estimation with related trajectories.
|
Specifically, we first divide states into several groups according to their uncertainty and locations in the graph. We then adopt an attention mechanism to select an appropriate group and assign the state with the highest value in the graph as a goal to encourage further exploration. We also propose to update goal generation hindsightly and value estimation with related trajectories, to help RL agents learn efficiently.
|
{
"annotation": [
"Development",
"Rewriting_heavy"
],
"instruction": "",
"annotator": "annotator_03"
}
|
{
"annotation": [
"Rewriting_heavy"
],
"instruction": "I need more detailed explanations.",
"annotator": "annotator_09"
}
|
c-9Hob6rd2
|
H4aN8Z9LDS
| 0
|
[
{
"text": "We then propose to divide states into several groups and incorporate them with an attention mechanism to select an appropriate state as a goal to encourage further exploration."
},
{
"text": "In order tohelp RL agents learn efficiently, we update goal generation hindsight and value estimation with related trajectories."
}
] |
[
{
"text": "Specifically, we first divide states into several groups according to their uncertainty and locations in the graph. We then adopt an attention mechanism to select an appropriate group and assign the state with the highest value in the graph as a goal to encourage further exploration."
},
{
"text": "We also propose to update goal generation hindsightly and value estimation with related trajectories, to help RL agents learn efficiently."
}
] |
MXi6uEx-hp.rdZfFcGyf9.19
|
• AGILE-Tuned without sync-freq-change: In Mnih et al. (2015), the authors used the periodic syncing between the target and the main networks to alleviate the issue of the frequently moving target in updating the Q-network. In this work, we compared two extreme cases of the frequency period in syncing the networks; 10 ( Sync-freq=10 in Fig. 13 (a)) and 500 ( AGILE-Tuned ). • AGILE-Tuned without graph-dim-change: In order to understand the difficulty in expressing the action relation through the compact representation, we compared the big and the small representations in the action graph, i e., the node-features are encoded in 32 ( Graph-dim=32 ) or 64( AGILE-Tuned ) dimensions.
|
• AGILE-Tuned without sync-freq-change: In Mnih et al. (2015), the authors used the periodic syncing between the target and the main networks to alleviate the issue of frequently moving Qvalue targets. In this work, we compare two extreme cases of the sync frequency: 10 depicted by Sync-freq=10 in Fig. 13 (a) and 500 depicted by AGILE-Tuned . • AGILE-Tuned without graph-dim-change: To understand the difficulty in expressing the action relations through a compact representation, we compare two hidden dimension sizes. The node-features are encoded in 32 ( Graph-dim=32 ) or 64( AGILE-Tuned ) dimensions.
|
{
"annotation": [
"Concision",
"Rewriting_medium"
],
"instruction": "Improve the English of this paragraph and make it shorter.",
"annotator": "annotator_10"
}
|
{
"annotation": [
"Concision",
"Rewriting_medium"
],
"instruction": "Make the wording of this paragraph much more straight forward, to be more concise.",
"annotator": "annotator_03"
}
|
MXi6uEx-hp
|
rdZfFcGyf9
| 19
|
[
{
"text": "• AGILE-Tuned without sync-freq-change:"
},
{
"text": "In Mnih et al. (2015), the authors used the periodic syncing between the target and the main networks to alleviate the issue of the frequently moving target in updating the Q-network."
},
{
"text": "In this work, we compared two extreme cases of the frequency period in syncing the networks; 10 ( Sync-freq=10 in Fig."
},
{
"text": "13 (a)) and 500 ( AGILE-Tuned )."
},
{
"text": "• AGILE-Tuned without graph-dim-change: In order to understand the difficulty in expressing the action relation through the compact representation, we compared the big and the small representations in the action graph,"
},
{
"text": "i e., the node-features are encoded in 32 ( Graph-dim=32 ) or 64( AGILE-Tuned ) dimensions."
}
] |
[
{
"text": "• AGILE-Tuned without sync-freq-change:"
},
{
"text": "In Mnih et al. (2015), the authors used the periodic syncing between the target and the main networks to alleviate the issue of frequently moving Qvalue targets."
},
{
"text": "In this work, we compare two extreme cases of the sync frequency: 10 depicted by Sync-freq=10 in Fig."
},
{
"text": "13 (a) and 500 depicted by AGILE-Tuned ."
},
{
"text": "• AGILE-Tuned without graph-dim-change: To understand the difficulty in expressing the action relations through a compact representation, we compare two hidden dimension sizes."
},
{
"text": "The node-features are encoded in 32 ( Graph-dim=32 ) or 64( AGILE-Tuned ) dimensions."
}
] |
t0ArcyG8Tb.rF5n2PkfMW.00
|
Cross-Modal Alignment Objective Functions Most previous methods adopt triplet loss as a major objective function for video-language modeling. CGMSCD [13] points out that the triplet loss sometimes leads to a wrong learning direction and thus devises an adaptive margin triplet loss for representation learning. More recent works [40, 17, 18] propose to apply the InfoNCE contrastiveloss [46, 37, 6] to enhance representation learning. Particularly, COTS [31] introduces a momentummechanism [14] to maintain more negative samples for image-text contrastive learning. Following COTS, we propose momentum video-level contrastive learning for video-text global alignment. Note that MIL-NCE [34] enhances the InfoNCE loss with multiple-instance learning (MIL) to cope with the misaligned narration descriptions in HowTo100M [35]. In this paper, we thus propose momentum frame-level MSL-contrastive learning to assist in addressing the misaligned frame problem.
|
Functions Most previous methods adopt triplet loss as a major objective function for video-language modeling. CGMSCD [14] points out that the triplet loss sometimes leads to a wrong learning direction and thus devises an adaptive margin triplet loss for representation learning. More recent works [41, 18, 19] propose to apply the InfoNCE contrastive loss [47, 38, 6] to enhance representation learning. Particularly, COTS [32] and BriVL [12] introduce a momentum mechanism [15] to maintain more negative samples for image-text contrastive learning. Following these two state-of-the-art models, we propose momentum video-level contrastive learning for video-text global alignment in this paper. Note that MIL-NCE [35] enhances the InfoNCE loss with multiple-instance learning (MIL) to cope with the misaligned narration descriptions in HowTo100M [36]. In this work, we thus propose momentum frame-level MSL-contrastive learning to assist in addressing the misaligned frame problem.
|
{
"annotation": [
"Content_addition"
],
"instruction": "",
"annotator": "annotator_02"
}
|
{
"annotation": [
"Development"
],
"instruction": "",
"annotator": "annotator_07"
}
|
t0ArcyG8Tb
|
rF5n2PkfMW
| 0
|
[
{
"text": "Cross-Modal Alignment Objective Functions"
},
{
"text": "Most previous methods adopt triplet loss as a major objective function for video-language modeling."
},
{
"text": "CGMSCD [13] points out that the triplet loss sometimes leads to a wrong learning direction and thus devises an adaptive margin triplet loss for representation learning."
},
{
"text": "More recent works [40, 17, 18] propose to apply the InfoNCE contrastiveloss [46, 37, 6] to enhance representation learning."
},
{
"text": "Particularly, COTS [31] introduces a momentummechanism [14] to maintain more negative samples for image-text contrastive learning."
},
{
"text": "Following COTS, we propose momentum video-level contrastive learning for video-text global alignment."
},
{
"text": "Note that MIL-NCE [34] enhances the InfoNCE loss with multiple-instance learning (MIL) to cope with the misaligned narration descriptions in HowTo100M [35]."
},
{
"text": "In this paper, we thus propose momentum frame-level MSL-contrastive learning to assist in addressing the misaligned frame problem."
}
] |
[
{
"text": "Functions"
},
{
"text": "Most previous methods adopt triplet loss as a major objective function for video-language modeling."
},
{
"text": "CGMSCD [14] points out that the triplet loss sometimes leads to a wrong learning direction and thus devises an adaptive margin triplet loss for representation learning."
},
{
"text": "More recent works [41, 18, 19] propose to apply the InfoNCE contrastive loss [47, 38, 6] to enhance representation learning."
},
{
"text": "Particularly, COTS [32] and BriVL [12] introduce a momentum mechanism [15] to maintain more negative samples for image-text contrastive learning."
},
{
"text": "Following these two state-of-the-art models, we propose momentum video-level contrastive learning for video-text global alignment in this paper."
},
{
"text": "Note that MIL-NCE [35] enhances the InfoNCE loss with multiple-instance learning (MIL) to cope with the misaligned narration descriptions in HowTo100M [36]."
},
{
"text": "In this work, we thus propose momentum frame-level MSL-contrastive learning to assist in addressing the misaligned frame problem."
}
] |
CVRUl83zah.I75TtW0V7.08
|
Problems in DSPN Increasing the number of optimization steps in DSPN generally results in a better solution, but requires significantly more memory and computation time. To be able to backpropagate through the inner optimization, the activations of every intermediate step have to be kept in memory. Furthermore, each additional optimization step requires backpropagating that step in the backward pass as well. These issues limit the number of iterations (10 in DSPN) that are computationally feasible, which can have a negative effect on the modeling capacity. We aim to address these problems in the following.
|
Problems in DSPN. Increasing the number of optimization steps for solving Equation 7 generally results in a better solution (Zhang et al., 2019), but requires significantly more memory and computation time and can lead to training problems (Belanger et al., 2017). To be able to backpropagate through Equation 7, the activations of every intermediate gradient descent step have to be kept in memory. Each additional optimization step in the forward pass also requires backpropagating that step in the backward pass. These issues limit the number of iterations that are computationally feasible (DSPN uses only 10 steps), which can have a negative effect on the modeling capacity due to insufficient minimization of Equation 7. We aim to address these problems in the following.
|
{
"annotation": [
"Development"
],
"instruction": "",
"annotator": "annotator_07"
}
| null |
CVRUl83zah
|
I75TtW0V7
| 8
|
[
{
"text": "Problems in DSPN Increasing the number of optimization steps in DSPN generally results in a better solution, but requires significantly more memory and computation time."
},
{
"text": "To be able to backpropagate through the inner optimization, the activations of every intermediate step have to be kept in memory."
},
{
"text": "Furthermore, each additional optimization step requires backpropagating that step in the backward pass as well."
},
{
"text": "These issues limit the number of iterations (10 in DSPN) that are computationally feasible, which can have a negative effect on the modeling capacity."
},
{
"text": "We aim to address these problems in the following."
}
] |
[
{
"text": "Problems in DSPN. Increasing the number of optimization steps for solving Equation 7 generally results in a better solution (Zhang et al., 2019), but requires significantly more memory and computation time and can lead to training problems (Belanger et al., 2017)."
},
{
"text": "To be able to backpropagate through Equation 7, the activations of every intermediate gradient descent step have to be kept in memory."
},
{
"text": "Each additional optimization step in the forward pass also requires backpropagating that step in the backward pass."
},
{
"text": "These issues limit the number of iterations that are computationally feasible (DSPN uses only 10 steps), which can have a negative effect on the modeling capacity due to insufficient minimization of Equation 7."
},
{
"text": "We aim to address these problems in the following."
}
] |
xCwJIwby8o.8vT6si6OaEQ.00
|
AdaptFormer is only employed in recognition tasks in this work, it’s unclear whether it can workwell in tasks beyond recognition, e.g. , object detection and semantic segmentation. We leave it forthe future exploration. Since our method is specially designed for efficient fine-tuning, we do notforesee obvious undesirable ethical/social impacts at this moment.
|
AdaptFormer is only employed in recognition tasks in this work, it’s unclear whether it can workwell in tasks beyond recognition, e.g. , object detection and semantic segmentation. We leave it forthe future exploration. Since our method is specially designed for efficient fine-tuning, we do notforesee obvious undesirable ethical/social impacts at this moment. Checklist For all authors...
|
{
"annotation": [
"Unusable"
],
"instruction": "",
"annotator": "annotator_02"
}
|
{
"annotation": [
"Unusable"
],
"instruction": "",
"annotator": "annotator_07"
}
|
xCwJIwby8o
|
8vT6si6OaEQ
| 0
|
[
{
"text": "AdaptFormer is only employed in recognition tasks in this work, it’s unclear whether it can workwell in tasks beyond recognition, e.g. , object detection and semantic segmentation."
},
{
"text": "We leave it forthe future exploration."
},
{
"text": "Since our method is specially designed for efficient fine-tuning, we do notforesee obvious undesirable ethical/social impacts at this moment. "
}
] |
[
{
"text": "AdaptFormer is only employed in recognition tasks in this work, it’s unclear whether it can workwell in tasks beyond recognition, e.g. , object detection and semantic segmentation."
},
{
"text": "We leave it forthe future exploration."
},
{
"text": "Since our method is specially designed for efficient fine-tuning, we do notforesee obvious undesirable ethical/social impacts at this moment. Checklist For all authors..."
}
] |
fB-ZoDze-Q.ZgK6YOyT9W.00
|
Reproducibility Statement. For theory, we provide proof and additional results in the Appendix. For empirical results, we provide implementation and environment details and hyperparameters in the Appendix. We also submit anonymous code in the supplemental materials.
|
R EPRODUCIBILITY S TATEMENT For theory, we provide proof and additional results in the Appendix. For empirical results, we provide implementation and environment details and hyperparameters in the Appendix. We also submit anonymous code in the supplemental materials.
|
{
"annotation": [
"Unusable"
],
"instruction": "",
"annotator": "annotator_03"
}
|
{
"annotation": [
"Unusable"
],
"instruction": "",
"annotator": "annotator_07"
}
|
fB-ZoDze-Q
|
ZgK6YOyT9W
| 0
|
[
{
"text": "Reproducibility Statement."
},
{
"text": "For theory, we provide proof and additional results in the Appendix."
},
{
"text": "For empirical results, we provide implementation and environment details and hyperparameters in the Appendix."
},
{
"text": "We also submit anonymous code in the supplemental materials."
}
] |
[
{
"text": "R EPRODUCIBILITY S TATEMENT"
},
{
"text": "For theory, we provide proof and additional results in the Appendix."
},
{
"text": "For empirical results, we provide implementation and environment details and hyperparameters in the Appendix."
},
{
"text": "We also submit anonymous code in the supplemental materials."
}
] |
pAdnbKIAaL.w-Mm4JV4h.00
|
Functional Causal Model In functional causal model (FCM), the relationships between variables are expressed through deterministic, functional equations: x i = f i ( pa i , u i ) , i = 1 , ..., N . The uncertainty in FCM is introduced via the assumption that variables u i , i = 1 , ..., N , are not observed (Pearl et al., 2000). If each function in FCM represents an autonomous mechanism, such FCM is called a structural model. Moreover, if each mechanism determines the value of one and only one variable, then the model is called a structural causal model (SCM). Taking the view from the SCM’s perspective, we want to learn a mixture of causal models whose inputs are pure latent variables and whose output is a single high-dimensional variable that describes complex data such as images.
|
Functional Causal Model In functional causal model (FCM), the relationships between variables are expressed through deterministic, functional equations: x i = f i ( pa i , u i ) , i = 1 , ..., N . The uncertainty in FCM is introduced via the assumption that variables u i , i = 1 , ..., N , are not observed (Pearl et al., 2000). If each function in FCM represents an autonomous mechanism, such FCM is called a structural model. Moreover, if each mechanism determines the value of one and only one variable, then the model is called a structural causal model (SCM). The SCMs form the basis for many statistical methods (Mooij & Heskes, 2013; Mooij et al., 2016) that aim at inferring knowledge of the underlying causal structure from data (Bongers et al., 2016). Taking the view from the SCM’s perspective, we want to learn a mixture of causal models whose inputs are pure latent variables and whose output is a single high-dimensional variable that describes complex data such as images.
|
{
"annotation": [
"Content_addition"
],
"instruction": "",
"annotator": "annotator_10"
}
|
{
"annotation": [
"Content_addition"
],
"instruction": "",
"annotator": "annotator_03"
}
|
pAdnbKIAaL
|
w-Mm4JV4h
| 0
|
[
{
"text": "Functional Causal Model"
},
{
"text": "In functional causal model (FCM), the relationships between variables are expressed through deterministic, functional equations: x i ="
},
{
"text": "f i ( pa i , u i ) , i = 1 , ..., N ."
},
{
"text": "The uncertainty in FCM is introduced via the assumption that variables"
},
{
"text": "u"
},
{
"text": "i , i = 1 , ..., N , are not observed (Pearl et al., 2000)."
},
{
"text": "If each function in FCM represents an autonomous mechanism, such FCM is called a structural model."
},
{
"text": "Moreover, if each mechanism determines the value of one and only one variable, then the model is called a structural causal model (SCM)."
},
{
"text": ""
},
{
"text": "Taking the view from the SCM’s perspective, we want to learn a mixture of causal models whose inputs are pure latent variables and whose output is a single high-dimensional variable that describes complex data such as images."
}
] |
[
{
"text": "Functional Causal Model"
},
{
"text": "In functional causal model (FCM), the relationships between variables are expressed through deterministic, functional equations: x i ="
},
{
"text": "f i ( pa i , u i ) , i = 1 , ..., N ."
},
{
"text": "The uncertainty in FCM is introduced via the assumption that variables"
},
{
"text": "u"
},
{
"text": "i , i = 1 , ..., N , are not observed (Pearl et al., 2000)."
},
{
"text": "If each function in FCM represents an autonomous mechanism, such FCM is called a structural model."
},
{
"text": "Moreover, if each mechanism determines the value of one and only one variable, then the model is called a structural causal model (SCM)."
},
{
"text": "The SCMs form the basis for many statistical methods (Mooij & Heskes, 2013; Mooij et al., 2016) that aim at inferring knowledge of the underlying causal structure from data (Bongers et al., 2016)."
},
{
"text": "Taking the view from the SCM’s perspective, we want to learn a mixture of causal models whose inputs are pure latent variables and whose output is a single high-dimensional variable that describes complex data such as images."
}
] |
Sx6SnclSL.nQLOUHvx8n.04
|
Hierarchical Modules. As reported in Table 7, on top of our final solution of Point-M2AE in the first row, we respectively experiment with removing the hierarchical encoder, hierarchical decoder,skip connections, and local spatial self-attention layers from our framework. Specifically, we replace our encoder and decoder with 1-stage plain architectures similar to MAE, which contains 15 and 2blocks of vanilla self-attention layers, respectively. We observe the absence of multi-stage structures either in encoder or decoder would hurt the performance, and the hierarchical encoder plays a better role than the decoder. Also, the skip connectionsand local spatial attention can well benefit thenetwork by providing complementary information and local inductive bias.
|
Hierarchical Modules. As reported in Table 7, on top of our final solution, Point-M2AE, in the first row, we respectively experiment with removing the hierarchical encoder, hierarchical decoder, and skip connections from our framework. Specifically, we replace our encoder and decoder with 1-stage plain architectures similar to MAE, which contains 15 and 2 vanilla transformer blocks, respectively. We observe the absence of multi-stage structures either in encoder or decoder hurts the performance, and the hierarchical encoder plays a better role than the decoder. Also, the skip connections well benefits the accuracy by providing complementary information for the decoder.
|
{
"annotation": [
"Rewriting_light",
"Concision"
],
"instruction": "Rewrite the last sentence to make it more concise.",
"annotator": "annotator_04"
}
|
{
"annotation": [
"Concision"
],
"instruction": "Make the paragraph shorter.",
"annotator": "annotator_07"
}
|
Sx6SnclSL
|
nQLOUHvx8n
| 4
|
[
{
"text": "Hierarchical Modules."
},
{
"text": "As reported in Table 7, on top of our final solution of Point-M2AE in the first row, we respectively experiment with removing the hierarchical encoder, hierarchical decoder,skip connections, and local spatial self-attention layers from our framework."
},
{
"text": "Specifically, we replace our encoder and decoder with 1-stage plain architectures similar to MAE, which contains 15 and 2blocks of vanilla self-attention layers, respectively."
},
{
"text": "We observe the absence of multi-stage structures either in encoder or decoder would hurt the performance, and the hierarchical encoder plays a better role than the decoder."
},
{
"text": "Also, the skip connectionsand local spatial attention can well benefit thenetwork by providing complementary information and local inductive bias."
}
] |
[
{
"text": "Hierarchical Modules."
},
{
"text": "As reported in Table 7, on top of our final solution, Point-M2AE, in the first row, we respectively experiment with removing the hierarchical encoder, hierarchical decoder, and skip connections from our framework."
},
{
"text": "Specifically, we replace our encoder and decoder with 1-stage plain architectures similar to MAE, which contains 15 and 2 vanilla transformer blocks, respectively."
},
{
"text": "We observe the absence of multi-stage structures either in encoder or decoder hurts the performance, and the hierarchical encoder plays a better role than the decoder."
},
{
"text": "Also, the skip connections well benefits the accuracy by providing complementary information for the decoder."
}
] |
hAi0PMz9T7.Ut8ESfYp1.01
|
Conventional TCP CC adopts a heuristic-based approach where the heuristic functions are manually crafted to adjust the traffic rate in a deterministic manner. Some proposals use packet loss as a signal for network congestion, e.g., Cubic [4], Reno [20],and NewReno [6], and others rely onthe variation of delay, e.g., Vegas [5]. Other CC designs combine packet lossand delay [21, 22]. Recently, different CC techniques specialized for data-center networks are alsoproposed [2, 3, 23].
|
Conventional TCP CC adopts a heuristic-based approach where the heuristic functions are manually crafted to adjust the traffic rate in a deterministic manner. Some proposals use packet loss as a signal for network congestion, e.g., Cubic [4], Reno [24], and NewReno [6]; while others rely on the variation of delay, e.g., Vegas [5], or combine packet loss and delay [25, 26]. Different CC techniques specialized for datacenter networks are also proposed [3, 27].
|
{
"annotation": [
"Concision"
],
"instruction": "Make the last sentence slightly shorter.",
"annotator": "annotator_04"
}
|
{
"annotation": [
"Concision"
],
"instruction": "Make the paragraph slightly shorter.",
"annotator": "annotator_07"
}
|
hAi0PMz9T7
|
Ut8ESfYp1
| 1
|
[
{
"text": "Conventional TCP CC adopts a heuristic-based approach where the heuristic functions are manually crafted to adjust the traffic rate in a deterministic manner."
},
{
"text": "Some proposals use packet loss as a signal for network congestion, e.g., Cubic [4], Reno [20],and NewReno [6], and others rely onthe variation of delay, e.g., Vegas [5]. Other CC designs combine packet lossand delay [21, 22]."
},
{
"text": "Recently, different CC techniques specialized for data-center networks are alsoproposed [2, 3, 23]."
}
] |
[
{
"text": "Conventional TCP CC adopts a heuristic-based approach where the heuristic functions are manually crafted to adjust the traffic rate in a deterministic manner."
},
{
"text": "Some proposals use packet loss as a signal for network congestion, e.g., Cubic [4], Reno [24], and NewReno [6]; while others rely on the variation of delay, e.g., Vegas [5], or combine packet loss and delay [25, 26]."
},
{
"text": "Different CC techniques specialized for datacenter networks are also proposed [3, 27]."
}
] |
BkxG1CvhWf.wcpE7maMLZ4.01
|
In this work we try to address that gap, and study the suitability of different state space topological properties as completeness thresholds for cost optimal planning with actions with 0-cost. We identify the sublist diameter as a completeness threshold, which has the advantage of being practically bounded. We also identify a new topological property, the subset diameter , as a completeness threshold and show that no tighter completeness threshold can be computed for a given problem without exploiting cost information, the ini- tial state, or the goal. To test the practical utility of the completeness thresholds we found, we devise a SAT compilation for cost optimal planning, and use that in an any-time planning as satisfiability algorithm, where the horizon is fixed from the beginning to the completeness threshold. This algorithm starts with an upper bound on the total cost and improves that cost upper bound every iteration. Experiments show that the algorithm is able to compute plans with costs better than the initial costs, and in many cases it can compute plans whose cost matches the optimal cost. Furthermore, the algorithm is able to prove the optimality of certain costs for a number of instances, some of which could not be proven optimal by the widely used LM-cut (Pommerening and Helmert 2012) planning heuristic.
|
In this work we try to address that gap, and study the suitability of different state space topological properties for being completeness thresholds for cost optimal planning with actions with 0-cost. We identify a completeness threshold that can be practically bounded, and show that no tighter completeness threshold can be computed for a given problem without exploiting cost information, the initial state, or the goal. To test the practical utility of this completeness threshold, we devise a SAT compilation for cost optimal planning, and use that in an any-time planning as satisfiabil- ity algorithm, where the horizon is fixed from the beginning to the completeness threshold. This algorithm starts with an upper bound on the total cost and improves that cost upper bound every iteration. Experiments show that the algorithm is able to compute plans with costs better than the initial costs, and in many cases it can compute plans whose cost matches the optimal cost. Furthermore, the algorithm is able to prove the optimality of certain costs for a number of instances, some of which could not be proven optimal by the widely used LM-cut planning heuristic.
|
{
"annotation": [
"Concision"
],
"instruction": "Shorten this paragraph.",
"annotator": "annotator_02"
}
|
{
"annotation": [
"Concision"
],
"instruction": "Make the beginning of this paragraph shorter.",
"annotator": "annotator_07"
}
|
BkxG1CvhWf
|
wcpE7maMLZ4
| 1
|
[
{
"text": "In this work we try to address that gap, and study the suitability of different state space topological properties as completeness thresholds for cost optimal planning with actions with 0-cost."
},
{
"text": "We identify the sublist diameter as a completeness threshold, which has the advantage of being practically bounded. We also identify a new topological property, the subset diameter , as a completeness threshold and show that no tighter completeness threshold can be computed for a given problem without exploiting cost information, the ini- tial state, or the goal."
},
{
"text": "To test the practical utility of the completeness thresholds we found, we devise a SAT compilation for cost optimal planning, and use that in an any-time planning as satisfiability algorithm, where the horizon is fixed from the beginning to the completeness threshold."
},
{
"text": "This algorithm starts with an upper bound on the total cost and improves that cost upper bound every iteration."
},
{
"text": "Experiments show that the algorithm is able to compute plans with costs better than the initial costs, and in many cases it can compute plans whose cost matches the optimal cost."
},
{
"text": "Furthermore, the algorithm is able to prove the optimality of certain costs for a number of instances, some of which could not be proven optimal by the widely used LM-cut (Pommerening and Helmert 2012) planning heuristic."
}
] |
[
{
"text": "In this work we try to address that gap, and study the suitability of different state space topological properties for being completeness thresholds for cost optimal planning with actions with 0-cost."
},
{
"text": "We identify a completeness threshold that can be practically bounded, and show that no tighter completeness threshold can be computed for a given problem without exploiting cost information, the initial state, or the goal."
},
{
"text": "To test the practical utility of this completeness threshold, we devise a SAT compilation for cost optimal planning, and use that in an any-time planning as satisfiabil- ity algorithm, where the horizon is fixed from the beginning to the completeness threshold."
},
{
"text": "This algorithm starts with an upper bound on the total cost and improves that cost upper bound every iteration."
},
{
"text": "Experiments show that the algorithm is able to compute plans with costs better than the initial costs, and in many cases it can compute plans whose cost matches the optimal cost."
},
{
"text": "Furthermore, the algorithm is able to prove the optimality of certain costs for a number of instances, some of which could not be proven optimal by the widely used LM-cut planning heuristic."
}
] |
33RNh69fYq.kMvWVl725x.03
|
Discussion . In this work, different kinds of objects are handled without being distinguished. We havenot used the category labels that may help the model better fit multi-class data. How to incorporatethe unified model with category labels should be further studied. In practical applications, the normalsamples are not as consistent as those in MVTec-AD. Therefore, the ability to deal with the scenarioswhere the normal samples share some diversity is important. Our UniAD is capable of handling all15 categories in MVTec-AD, hence would be more suitable for real scenes.
|
Discussion . In this work, different kinds of objects are handled without being distinguished. We havenot used the category labels that may help the model better fit multi-class data. How to incorporatethe unified model with category labels should be further studied. In practical uses, normal samples arenot as consistent as those in MVTec-AD, often manifest themselves in some diversity. Our UniADcould handle all 15 categories in MVTec-AD, hence would be more suitable for real scenes. However,anomaly detection may be used for video surveillance, which may infringe personal privacy.
|
{
"annotation": [
"Development",
"Rewriting_medium"
],
"instruction": "",
"annotator": "annotator_07"
}
| null |
33RNh69fYq
|
kMvWVl725x
| 3
|
[
{
"text": "Discussion ."
},
{
"text": "In this work, different kinds of objects are handled without being distinguished."
},
{
"text": "We havenot used the category labels that may help the model better fit multi-class data."
},
{
"text": "How to incorporatethe unified model with category labels should be further studied."
},
{
"text": "In practical applications, the normalsamples are not as consistent as those in MVTec-AD."
},
{
"text": "Therefore, the ability to deal with the scenarioswhere the normal samples share some diversity is important. Our UniAD is capable of handling all15 categories in MVTec-AD, hence would be more suitable for real scenes. "
}
] |
[
{
"text": "Discussion ."
},
{
"text": "In this work, different kinds of objects are handled without being distinguished."
},
{
"text": "We havenot used the category labels that may help the model better fit multi-class data."
},
{
"text": "How to incorporatethe unified model with category labels should be further studied."
},
{
"text": "In practical uses, normal samples arenot as consistent as those in MVTec-AD, often manifest themselves in some diversity."
},
{
"text": "Our UniADcould handle all 15 categories in MVTec-AD, hence would be more suitable for real scenes. However,anomaly detection may be used for video surveillance, which may infringe personal privacy."
}
] |
WldWha1MT.LL2ZsGpJga.01
|
We show how induced matchings guarantee the spatially correct matching between barcodes in a segmentation setting. Furthermore, we propose an efficient algorithm to compute TopoMatch for images. We show that TopoMatch is an interpretable metric to evaluate the topological correctness of segmentations. Moreover,we demonstrate how induced matchings can be used to train segmentation networks and improve the topological correctness of the segmentations across all 6 baseline datasets while preserving volumetricsegmentation performance.
|
We show how induced matchings guarantee the spatially correct matching between barcodes in a segmentation setting. Furthermore, we propose an efficient algorithm to compute TopoMatch for images. We show that TopoMatch is an interpretable metric to evaluate the topological correctness of segmentations, which is more sensitive than the well-established Betti number error. Moreover, the differentiability of the TopoMatch loss enables its use as a loss function. It improves the topological performance of segmentation networks across six diverse datasets while preserving the volumetric performance.
|
{
"annotation": [
"Rewriting_medium",
"Development"
],
"instruction": "",
"annotator": "annotator_07"
}
| null |
WldWha1MT
|
LL2ZsGpJga
| 1
|
[
{
"text": "We show how induced matchings guarantee the spatially correct matching between barcodes in a segmentation setting."
},
{
"text": "Furthermore, we propose an efficient algorithm to compute TopoMatch for images."
},
{
"text": "We show that TopoMatch is an interpretable metric to evaluate the topological correctness of segmentations."
},
{
"text": "Moreover,we demonstrate how induced matchings can be used to train segmentation networks and improve the topological correctness of the segmentations across all 6 baseline datasets while preserving volumetricsegmentation performance."
}
] |
[
{
"text": "We show how induced matchings guarantee the spatially correct matching between barcodes in a segmentation setting."
},
{
"text": "Furthermore, we propose an efficient algorithm to compute TopoMatch for images."
},
{
"text": "We show that TopoMatch is an interpretable metric to evaluate the topological correctness of segmentations, which is more sensitive than the well-established Betti number error."
},
{
"text": "Moreover, the differentiability of the TopoMatch loss enables its use as a loss function. It improves the topological performance of segmentation networks across six diverse datasets while preserving the volumetric performance."
}
] |
9wfZbn73om.FhHH15YtKt.01
|
In summary, our contributions include: 1) proposing a novel ( σ, δ ) -measure to quantify the data augmentation; 2) proposing a theoretical framework for contrastive SSL, which suggests that alignment, divergence, and concentration are key factors of generalization ability; 3) provably verifying that not only the InfoNCE loss but also the cross-correlation loss satisfy the alignment and divergence; 4) empirically showing that the concentration w.r.t. the proposed augmented distance is highly related to the downstream performance.
|
In summary, our contributions include: 1) proposing a novel ( σ, δ ) -measure to quantify data augmentation; 2) presenting a theoretical framework for contrastive SSL that highlights alignment, divergence, and concentration as key factors for generalization ability; provably verifying that not only the InfoNCE loss but also the cross-correlation loss satisfy alignment and divergence; 4) showing a strong correlation between downstream performance and concentration of augmented data.
|
{
"annotation": [
"Rewriting_medium"
],
"instruction": "Make the sentence precise.",
"annotator": "annotator_08"
}
|
{
"annotation": [
"Rewriting_light"
],
"instruction": "Improve english in this text.",
"annotator": "annotator_07"
}
|
9wfZbn73om
|
FhHH15YtKt
| 1
|
[
{
"text": "In summary, our contributions include: 1) proposing a novel ( σ, δ ) -measure to quantify the data augmentation; 2) proposing a theoretical framework for contrastive SSL, which suggests that alignment, divergence, and concentration are key factors of generalization ability; 3) provably verifying that not only the InfoNCE loss but also the cross-correlation loss satisfy the alignment and divergence; 4) empirically showing that the concentration w.r.t."
},
{
"text": "the proposed augmented distance is highly related to the downstream performance."
}
] |
[
{
"text": "In summary, our contributions include: 1) proposing a novel ( σ, δ ) -measure to quantify data augmentation; 2) presenting a theoretical framework for contrastive SSL that highlights alignment, divergence, and concentration as key factors for generalization ability; provably verifying that not only the InfoNCE loss but also the cross-correlation loss satisfy alignment and divergence;"
},
{
"text": "4) showing a strong correlation between downstream performance and concentration of augmented data."
}
] |
hegI87bI5S.fL6Q48sfx8.13
|
We observed the main effect of W ( F 2 , 22 = 25 . 3, p < 0 . 001, η 2 p = 0 . 967) (Figure 4 (i)). Pair-wise comparisons showed that the error rates increased as W decreased. The other parameters did not show the main effects . No significant interaction was observed.
|
We observed the main effect of W ( F 2 , 22 = 25 . 3, p < 0 . 001, η 2 p = 0 . 967) (Figure 4 (i)). The pair-wise comparisons showed that error rates increased with a decrease in W . The other parameters did not show the main effects. No significant interaction was observed.
|
{
"annotation": [
"Rewriting_light"
],
"instruction": "Replace some words in the paragraph",
"annotator": "annotator_10"
}
|
{
"annotation": [
"Rewriting_light"
],
"instruction": "Slightly revise for readability.",
"annotator": "annotator_07"
}
|
hegI87bI5S
|
fL6Q48sfx8
| 13
|
[
{
"text": "We observed the main effect of W ( F 2 , 22 = 25 ."
},
{
"text": "3, p < 0 ."
},
{
"text": "001, η 2 p = 0 . 967) (Figure 4 (i))."
},
{
"text": "Pair-wise comparisons showed that the error rates increased as W decreased."
},
{
"text": "The other parameters did not show the main effects ."
},
{
"text": "No significant interaction was observed."
}
] |
[
{
"text": "We observed the main effect of W ( F 2 , 22 = 25 ."
},
{
"text": "3, p < 0 ."
},
{
"text": "001, η 2 p = 0 . 967) (Figure 4 (i))."
},
{
"text": "The pair-wise comparisons showed that error rates increased with a decrease in W ."
},
{
"text": "The other parameters did not show the main effects."
},
{
"text": "No significant interaction was observed."
}
] |
F3z0hchpGy.xeuzrNJiNW.03
|
As all figures in the FAUST data set are similarly meshed and oriented, breaking the gauge equivariance in higher layers can actually be beneficial. As shown in Weiler & Cesa (2019), symmetry can be broken by treating non-invariant features as invariant features as input to the final 1 × 1 convolution. Such architectures are equivariant on lower levels, while allowing orientation sensitivity at higher layers.
|
As all meshes in the FAUST dataset share the same topology, breaking the gauge equivariance in higher layers can actually be beneficial. As shown in (Weiler & Cesa, 2019), symmetry can be broken by treating non-invariant features as invariant features as input to the final 1 × 1 convolution. Such architectures are equivariant on lower levels, while allowing orientation sensitivity at higher layers.
|
{
"annotation": [
"Rewriting_light"
],
"instruction": "Rephrase the paragraph",
"annotator": "annotator_06"
}
|
{
"annotation": [
"Rewriting_medium"
],
"instruction": "Rephrase the first sentence.",
"annotator": "annotator_07"
}
|
F3z0hchpGy
|
xeuzrNJiNW
| 3
|
[
{
"text": "As all figures in the FAUST data set are similarly meshed and oriented, breaking the gauge equivariance in higher layers can actually be beneficial."
},
{
"text": "As shown in Weiler & Cesa (2019), symmetry can be broken by treating non-invariant features as invariant features as input to the final 1 × 1 convolution."
},
{
"text": "Such architectures are equivariant on lower levels, while allowing orientation sensitivity at higher layers."
}
] |
[
{
"text": "As all meshes in the FAUST dataset share the same topology, breaking the gauge equivariance in higher layers can actually be beneficial."
},
{
"text": "As shown in (Weiler & Cesa, 2019), symmetry can be broken by treating non-invariant features as invariant features as input to the final 1 × 1 convolution."
},
{
"text": "Such architectures are equivariant on lower levels, while allowing orientation sensitivity at higher layers."
}
] |
r1bMvE4aE.Sy4sJiGkr.00
|
In order to evaluate the feasibility of our suggested approach for deriving diverse sets of plans according to various existing metrics, we have implemented our approach on top of the Fast Downward planning system (Helmert 2006). The code can be made available upon request. Further, we implemented an external component, that given a set of plans and a metric returns the score of the set under that metric (Katz and Sohrabi 2019).
|
In order to evaluate the feasibility of our suggested approach for deriving diverse sets of plans according to various existing metrics, we have implemented our approach on top of the Fast Downward planning system (Helmert 2006). Our planners, ForbidIterative (FI) diverse planners are available as part of the collection of ForbidIterative planners (Katz, Sohrabi, and Udrea 2019a). Further, we implemented an external component, that given a set of plans and a metric returns the score of the set under that metric (Katz and Sohrabi 2019).
|
{
"annotation": [
"Content_substitution",
"Development"
],
"instruction": "",
"annotator": "annotator_07"
}
| null |
r1bMvE4aE
|
Sy4sJiGkr
| 0
|
[
{
"text": "In order to evaluate the feasibility of our suggested approach for deriving diverse sets of plans according to various existing metrics, we have implemented our approach on top of the Fast Downward planning system (Helmert 2006)."
},
{
"text": "The code can be made available upon request."
},
{
"text": "Further, we implemented an external component, that given a set of plans and a metric returns the score of the set under that metric (Katz and Sohrabi 2019)."
}
] |
[
{
"text": "In order to evaluate the feasibility of our suggested approach for deriving diverse sets of plans according to various existing metrics, we have implemented our approach on top of the Fast Downward planning system (Helmert 2006)."
},
{
"text": "Our planners, ForbidIterative (FI) diverse planners are available as part of the collection of ForbidIterative planners (Katz, Sohrabi, and Udrea 2019a)."
},
{
"text": "Further, we implemented an external component, that given a set of plans and a metric returns the score of the set under that metric (Katz and Sohrabi 2019)."
}
] |
p8yrWJS4W.eHA5NswPr.00
|
In practice, we do not have access to the true distribution p w . Rather, we are typically given a corpus { w p w n } N n =1 , whose instances we assume to be sampled i.i.d. from p w . The common approach to address this shortcoming is (when possible) to derive a statistical estimator (cid:98) ∆ that uses this corpus to approximate ∆ . There are two common strategies for building such estimators: Monte Carlo estimation and plug-in estimation. 3.2. M ONTE C ARLO E STIMATION Our i.i.d. assumption w.r.t. samples w p w ∼ p w allows us to derive a Monte Carlo estimator for certain divergences. We start with the forward KL divergence—present in both ∆ → and ∆ exp :
|
In practice, we do not have access to the true distribution p w . Rather, we are typically given a corpus { w p w n } N n =1 , whose instances we assume to be sampled i.i.d. from p w . The common approach to address this issue is thus to derive a statistical estimator (cid:98) ∆ that uses this corpus to approximate ∆ . There are two common strategies for building such estimators: Monte Carlo and plug-in estimation. Monte Carlo Estimation. Our i.i.d. assumption w.r.t. samples in { w p w n } Nn =1 allows us to derive a Monte Carlo estimator for certain divergences. We start with the forward KL divergence:
|
{
"annotation": [
"Concision"
],
"instruction": "Fix formatting issues and simplify the wording of the paragraph.",
"annotator": "annotator_03"
}
|
{
"annotation": [
"Concision",
"Rewriting_light"
],
"instruction": "Fix the caplocks problem. Slightly shorten the paragraph.",
"annotator": "annotator_07"
}
|
p8yrWJS4W
|
eHA5NswPr
| 0
|
[
{
"text": "In practice, we do not have access to the true distribution p w ."
},
{
"text": "Rather, we are typically given a corpus { w p w n } N n =1 , whose instances we assume to be sampled i.i.d."
},
{
"text": "from p w ."
},
{
"text": "The common approach to address this shortcoming is (when possible) to derive a statistical estimator (cid:98) ∆ that uses this corpus to approximate ∆ ."
},
{
"text": "There are two common strategies for building such estimators: Monte Carlo estimation and plug-in estimation. 3.2. M ONTE C ARLO E STIMATION"
},
{
"text": "Our i.i.d."
},
{
"text": "assumption w.r.t."
},
{
"text": "samples w p w ∼ p w allows us to derive a Monte Carlo estimator for certain divergences."
},
{
"text": "We start with the forward KL divergence—present in both ∆ → and ∆ exp :"
}
] |
[
{
"text": "In practice, we do not have access to the true distribution p w ."
},
{
"text": "Rather, we are typically given a corpus { w p w n } N n =1 , whose instances we assume to be sampled i.i.d."
},
{
"text": "from p w ."
},
{
"text": "The common approach to address this issue is thus to derive a statistical estimator (cid:98) ∆ that uses this corpus to approximate ∆ ."
},
{
"text": "There are two common strategies for building such estimators: Monte Carlo and plug-in estimation. Monte Carlo Estimation."
},
{
"text": "Our i.i.d."
},
{
"text": "assumption w.r.t."
},
{
"text": "samples in { w p w n } Nn =1 allows us to derive a Monte Carlo estimator for certain divergences."
},
{
"text": "We start with the forward KL divergence:"
}
] |
SRquLaHRM4.vI2x5N-YHC.03
|
Q: What is the extra computation time cost of PLOT over CoOp baseline? A: Around 10 %inference speed and 5 % training time . Despite the performance improvement, the extra computationcost is still a limitation of PLOT. Please see the detailed analysis in the supplementary materials.
|
Q: What is the extra computation time cost of PLOT over CoOp baseline? A: Around 10 %inference speed and 5 % training time . Please see the detailed comparisons and analysis in thesupplementary materials.
|
{
"annotation": [
"Content_deletion"
],
"instruction": "Remove any redundant information that is not essential for the research question answered.",
"annotator": "annotator_03"
}
|
{
"annotation": [
"Content_deletion",
"Development"
],
"instruction": "",
"annotator": "annotator_09"
}
|
SRquLaHRM4
|
vI2x5N-YHC
| 3
|
[
{
"text": "Q: What is the extra computation time cost of PLOT over CoOp baseline?"
},
{
"text": "A: Around 10 %inference speed and 5 % training time ."
},
{
"text": "Despite the performance improvement, the extra computationcost is still a limitation of PLOT."
},
{
"text": "Please see the detailed analysis in the supplementary materials."
}
] |
[
{
"text": "Q: What is the extra computation time cost of PLOT over CoOp baseline?"
},
{
"text": "A: Around 10 %inference speed and 5 % training time ."
},
{
"text": ""
},
{
"text": "Please see the detailed comparisons and analysis in thesupplementary materials."
}
] |
nkOpNqg-ip.OwJsIhe_p.00
|
Surprisingly, the baselines used to assess the performance of AutoML tools are typically only other AutoML tools but no “simple” baselines. For example, a very simple baseline would be to imitate the steps a human data scientist would take, so that such an approach should be at least considered as a baseline. Without such baselines, we do not learn how AutoML tools improve upon ad-hoc techniques but only how they compare relatively to each other. To our knowledge, the only work accounting for such baselines is Thornton et al. (2013) using the Exhaustive-Default (“Ex-def”) baseline, which is to take the default parametrized model that is best in a cross-validation. They also discuss a grid search, which is however not applicable in practice.
|
The baselines used to assess the performance of AutoML tools are often other AutoML tools or random search. A simple but perhaps more sensible baseline than random search would be to imitate the steps a human data scientist would take. Without such baselines, we do not learn how AutoML tools improve upon ad-hoc techniques but only how they compare relatively to each other. To our knowledge, the only work accounting for such baselines is (Thornton et al., 2013), using the Exhaustive-Default (“Ex-def”) baseline which is to take the default parametrized model that is best in a cross-validation. They also discuss a grid search, which is however not applicable in practice.
|
{
"annotation": [
"Rewriting_light"
],
"instruction": "Edit some formulations to sound more neutral.",
"annotator": "annotator_04"
}
|
{
"annotation": [
"Concision",
"Rewriting_light"
],
"instruction": "Make the beginning of the paragraph shorter.",
"annotator": "annotator_07"
}
|
nkOpNqg-ip
|
OwJsIhe_p
| 0
|
[
{
"text": "Surprisingly, the baselines used to assess the performance of AutoML tools are typically only other AutoML tools but no “simple” baselines."
},
{
"text": "For example, a very simple baseline would be to imitate the steps a human data scientist would take, so that such an approach should be at least considered as a baseline."
},
{
"text": "Without such baselines, we do not learn how AutoML tools improve upon ad-hoc techniques but only how they compare relatively to each other."
},
{
"text": "To our knowledge, the only work accounting for such baselines is Thornton et al. (2013) using the Exhaustive-Default (“Ex-def”) baseline, which is to take the default parametrized model that is best in a cross-validation."
},
{
"text": "They also discuss a grid search, which is however not applicable in practice."
}
] |
[
{
"text": "The baselines used to assess the performance of AutoML tools are often other AutoML tools or random search."
},
{
"text": "A simple but perhaps more sensible baseline than random search would be to imitate the steps a human data scientist would take."
},
{
"text": "Without such baselines, we do not learn how AutoML tools improve upon ad-hoc techniques but only how they compare relatively to each other."
},
{
"text": "To our knowledge, the only work accounting for such baselines is (Thornton et al., 2013), using the Exhaustive-Default (“Ex-def”) baseline which is to take the default parametrized model that is best in a cross-validation."
},
{
"text": "They also discuss a grid search, which is however not applicable in practice."
}
] |
Iw0CmVAYR5.JQTOJMtn3t.00
|
Quantization effect In Appendix 4.6, we also study how the performance of DAT is robust to gradient quantization. We find that when the number of bits is reduced from 32 to 8 , the resulting TA and RA becomes slightly worse than the best 32 -bit case. For example, in the worst case of CIFAR-10, TA drops 0 . 91% and 6 . 33% for DAT-PGD and DAT-FGSM, respectively. And RA drops 4 . 73% and 5 . 22% , respectively. However, the use of quantization reduces the amount of data transmission per iteration. We also show that if a high performance computing cluster of nodes (with NVLink high-speed GPU interconnect (Foley & Danskin, 2017)) is used, the communication cost can be further reduced.
|
Quantization effect In Appendix 4.6, we also study how the performance of DAT is affected by gradient quantization. We find that when the number of bits is reduced from 32 to 8 , the resulting TA and RA becomes worse than the best 32 -bit case. For example, in the worst case (8-bit 2-sided quantization) of CIFAR-10, TA drops 1 . 52% and 6 . 32% for DAT-PGD and DAT-FGSM, respectively. And RA drops 4 . 74% and 5 . 58% , respectively. Note that our main communication configuration is given by Ring-AllReduce that calls for 1-sided (rather than 2-sided) quantization. We also observe that DAT-FGSM is more sensitive to effect of gradient quantization than DAT-PGD. Even in the centralized setting, the use of 8-bit quantization can lead to a non-trivial drop in TA (see Table A5). However, the use of quantization reduces the amount of data transmission per iteration. We also show that if a high performance computing cluster of nodes (with NVLink high-speed GPU interconnect (Foley & Danskin, 2017)) is used, the communication cost can be further reduced.
|
{
"annotation": [
"Content_addition",
"Rewriting_light"
],
"instruction": "",
"annotator": "annotator_06"
}
|
{
"annotation": [
"Content_addition"
],
"instruction": "",
"annotator": "annotator_08"
}
|
Iw0CmVAYR5
|
JQTOJMtn3t
| 0
|
[
{
"text": "Quantization effect In Appendix 4.6, we also study how the performance of DAT is robust to gradient quantization."
},
{
"text": "We find that when the number of bits is reduced from 32 to 8 , the resulting TA and RA becomes slightly worse than the best 32 -bit case."
},
{
"text": "For example, in the worst case of CIFAR-10, TA drops 0 ."
},
{
"text": "91% and 6 ."
},
{
"text": "33% for DAT-PGD and DAT-FGSM, respectively."
},
{
"text": "And RA drops 4 ."
},
{
"text": "73% and 5 ."
},
{
"text": "22% , respectively. "
},
{
"text": "However, the use of quantization reduces the amount of data transmission per iteration."
},
{
"text": "We also show that if a high performance computing cluster of nodes (with NVLink high-speed GPU interconnect (Foley & Danskin, 2017)) is used, the communication cost can be further reduced."
}
] |
[
{
"text": "Quantization effect In Appendix 4.6, we also study how the performance of DAT is affected by gradient quantization."
},
{
"text": "We find that when the number of bits is reduced from 32 to 8 , the resulting TA and RA becomes worse than the best 32 -bit case."
},
{
"text": "For example, in the worst case (8-bit 2-sided quantization) of CIFAR-10, TA drops 1 ."
},
{
"text": "52% and 6 ."
},
{
"text": "32% for DAT-PGD and DAT-FGSM, respectively."
},
{
"text": "And RA drops 4 ."
},
{
"text": "74% and 5 ."
},
{
"text": "58% , respectively. Note that our main communication configuration is given by Ring-AllReduce that calls for 1-sided (rather than 2-sided) quantization. We also observe that DAT-FGSM is more sensitive to effect of gradient quantization than DAT-PGD. Even in the centralized setting, the use of 8-bit quantization can lead to a non-trivial drop in TA (see Table A5)."
},
{
"text": "However, the use of quantization reduces the amount of data transmission per iteration."
},
{
"text": "We also show that if a high performance computing cluster of nodes (with NVLink high-speed GPU interconnect (Foley & Danskin, 2017)) is used, the communication cost can be further reduced."
}
] |
hegI87bI5S.fL6Q48sfx8.14
|
There was a main effect in Position, and Position = Inside has a longer movement time than Position = Outside . We observed a significant interaction of I × Position . At I = 0 , the movement time increased compared to the condition no notch by approxi- mately 11.8% in Position = Inside , and approximately 4.93% in Position = Outside .
|
Another effect was observed, in that Position = Inside had a longer movement time than that for Position = Outside . We observed a significant interaction of I × Position . At I = 0 , the movement time increased compared to that for the condition of no notch by approximately 11.8% in Position = Inside , and by approximately 4.93% in Position = Outside .
|
{
"annotation": [
"Rewriting_medium"
],
"instruction": "Modify the first sentence",
"annotator": "annotator_10"
}
|
{
"annotation": [
"Rewriting_light"
],
"instruction": "Revise this text to make it more readable and direct.",
"annotator": "annotator_07"
}
|
hegI87bI5S
|
fL6Q48sfx8
| 14
|
[
{
"text": "There was a main effect in Position, and Position ="
},
{
"text": " Inside has a longer movement time than Position ="
},
{
"text": "Outside ."
},
{
"text": "We observed a significant interaction of I × Position ."
},
{
"text": "At I = 0 , the movement time increased compared to the condition no notch by approxi- mately 11.8% in Position ="
},
{
"text": "Inside , and approximately 4.93% in Position = Outside ."
}
] |
[
{
"text": "Another effect was observed, in that Position"
},
{
"text": "= Inside had a longer movement time than that for Position ="
},
{
"text": "Outside ."
},
{
"text": "We observed a significant interaction of I × Position ."
},
{
"text": "At I = 0 , the movement time increased compared to that for the condition of no notch by approximately 11.8% in Position ="
},
{
"text": "Inside , and by approximately 4.93% in Position = Outside ."
}
] |
r1aSglP6b.Bk3yZFv6-.00
|
• The shape of the loss function after spontaneous symmetry breaking has the same shape observed by Goodfellow et al. (2014) towards the end of training, see Figure 1. • A cyclical learning rate (Smith & Topin, 2017) helps to get to the new minimum faster, see Section 2.5. • Stochasticity in gradient descent juggles the loss function such that the weights are no longer at the local maximum of Figure 1. A gradient descent step is taken to further take the weights towards the local minimum. Stochasticity helps the network to generalize better. • When the learning rate is too small to move away from A in Figure 1. Non-linearities move the weight away from A , this corresponds to breaking the symmetry explicitly in Theorem 1. PReLU’s (He et al., 2015b)performance could be related to the optimization of this process. • Results from Shwartz-Ziv & Tishby (2017) are due to spontaneous symmetry breaking, see Section 4. • Identity mapping outperforms other skip connections (He et al., 2016) is a result of the residual unit’s output being small. Then the residual units can be decoupled leading to a small λ and so it is easier for spontaneous symmetry breaking to occur, from m 2 = − µ 2 + 14 λη 2 . • Skip connection across residual units breaks additional symmetry. Suppose now an identity skip connection connects x 1 and the output of F 2 . Now perform a symmetry transformation on x 1 and x 2 , Q 1 and Q 2 ∈ G , respectively. Then the output after two residual untis is Q x = Q 1 x 1 + Q 2 x 2 + Q 2 F 2 . Neither Q = Q 1 nor Q = Q 2 can satisfies the covariance under G . This is observed by Orhan & Pitkow (2017). • The shattered gradient problem (Balduzzi et al., 2017). It is observed that the gradient in deep (non-residual) networks is very close to white noise. This is reflected in the exponential in Equation (6). This effect on ResNet is reduced because of the decoupling limit λ → 0 . This leads to the weight eigenvalues m 2 being larger in non-residual networks owing to m 2 = − µ 2 + 14 λη 2 . And so a higher oscillation frequency in the correlation function. • In recurrent neural networks, multiplicative gating (Yuhuai et al., 2016) combines the input x and the hidden state h by an element-wise product. Their method outperforms the method with an addition x + h because the multiplication gating breaks the covariance of the output. A transformation Q x ∗ Q h (cid:54) = Q ( x ∗ h ) , whereas for addition the output remains covariant Q x + Q h = Q ( x + h ) .
|
• The shape of the loss function after spontaneous symmetry breaking has the same shape observed by Goodfellow et al. (2014) towards the end of training, see Figure 1. • The training error typically drops drastically when learning rate is decreased. This occurs when the learning rate drops below η c , forcing a phase transition so that new minima develop. See Figure 1. • A cyclical learning rate (Smith & Topin, 2017) helps to get to the new minimum faster, see Section 2.5. • Stochasticity in gradient descent juggles the loss function such that the weights are no longer at the local maximum of Figure 1. A gradient descent step is taken to further take the weights towards the local minimum. Stochasticity helps the network to generalize better. • When the learning rate is too small to move away from A in Figure 1. PReLU’s (He et al., 2015b) could move the weight away from A through the training of the non-linearity. This corresponds to breaking the symmetry explicitly in Theorem 1. • Results from Shwartz-Ziv & Tishby (2017) are due to spontaneous symmetry breaking, see Section 4. • Identity mapping outperforms other skip connections (He et al., 2016) is a result of the residual unit’s output being small. Then the residual units can be decoupled leading to a small λ and so it is easier for spontaneous symmetry breaking to occur, from m 2 = − µ 2 + 14 λη 2 . • Skip connection across residual units breaks additional symmetry. Suppose now an identity skip connection connects x 1 and the output of F 2 . Now perform a symmetry transformation on x 1 and x 2 , Q 1 and Q 2 ∈ G , respectively. Then the output after two residual untis is Q x = Q 1 x 1 + Q 2 x 2 + Q 2 F 2 . Neither Q = Q 1 nor Q = Q 2 can satisfies the covariance under G . This is observed by Orhan & Pitkow (2017). • The shattered gradient problem (Balduzzi et al., 2017). It is observed that the gradient in deep (non-residual) networks is very close to white noise. This is reflected in the exponential in Equation (6). This effect on ResNet is reduced because of the decoupling limit λ → 0 . This leads to the weight eigenvalues m 2 being larger in non-residual networks owing to m 2 = − µ 2 + 14 λη 2 . And so a higher oscillation frequency in the correlation function. • In recurrent neural networks, multiplicative gating (Yuhuai et al., 2016) combines the input x and the hidden state h by an element-wise product. Their method outperforms the method with an addition x + h because the multiplication gating breaks the covariance of the output. A transformation Q x ∗ Q h (cid:54) = Q ( x ∗ h ) , whereas for addition the output remains covariant Q x + Q h = Q ( x + h ) .
|
{
"annotation": [
"Content_addition"
],
"instruction": "",
"annotator": "annotator_02"
}
|
{
"annotation": [
"Content_addition",
"Rewriting_medium"
],
"instruction": "",
"annotator": "annotator_07"
}
|
r1aSglP6b
|
Bk3yZFv6-
| 0
|
[
{
"text": "• The shape of the loss function after spontaneous symmetry breaking has the same shape observed by Goodfellow et al."
},
{
"text": "(2014) towards the end of training, see Figure 1."
},
{
"text": ""
},
{
"text": ""
},
{
"text": ""
},
{
"text": "• A cyclical learning rate (Smith & Topin, 2017) helps to get to the new minimum faster, see Section 2.5."
},
{
"text": "• Stochasticity in gradient descent juggles the loss function such that the weights are no longer at the local maximum of Figure 1."
},
{
"text": "A gradient descent step is taken to further take the weights towards the local minimum."
},
{
"text": "Stochasticity helps the network to generalize better."
},
{
"text": "• When the learning rate is too small to move away from A in Figure 1."
},
{
"text": "Non-linearities move the weight away from A , this corresponds to breaking the symmetry explicitly in Theorem 1."
},
{
"text": "PReLU’s"
},
{
"text": "(He et al., 2015b)performance could be related to the optimization of this process."
},
{
"text": ""
},
{
"text": "• Results from Shwartz-Ziv & Tishby (2017) are due to spontaneous symmetry breaking, see Section 4."
},
{
"text": "• Identity mapping outperforms other skip connections (He et al., 2016) is a result of the residual unit’s output being small."
},
{
"text": "Then the residual units can be decoupled leading to a small λ and so it is easier for spontaneous symmetry breaking to occur, from m 2 = − µ 2 + 14 λη 2 ."
},
{
"text": "• Skip connection across residual units breaks additional symmetry."
},
{
"text": "Suppose now an identity skip connection connects x 1 and the output of F 2 ."
},
{
"text": "Now perform a symmetry transformation on x 1 and x 2 , Q 1 and Q 2 ∈ G , respectively."
},
{
"text": "Then the output after two residual untis is Q x"
},
{
"text": "="
},
{
"text": "Q 1 x 1 + Q 2 x 2 + Q 2 F 2 ."
},
{
"text": "Neither Q = Q 1 nor Q = Q 2 can satisfies the covariance under G ."
},
{
"text": "This is observed by Orhan & Pitkow (2017)."
},
{
"text": "• The shattered gradient problem (Balduzzi et al., 2017)."
},
{
"text": "It is observed that the gradient in deep (non-residual) networks is very close to white noise."
},
{
"text": "This is reflected in the exponential in Equation (6)."
},
{
"text": "This effect on ResNet is reduced because of the decoupling limit λ → 0 ."
},
{
"text": "This leads to the weight eigenvalues m 2 being larger in non-residual networks owing to m 2 = − µ 2 + 14 λη 2 ."
},
{
"text": "And so a higher oscillation frequency in the correlation function."
},
{
"text": "• In recurrent neural networks, multiplicative gating (Yuhuai et al., 2016) combines the input x and the hidden state h by an element-wise product."
},
{
"text": "Their method outperforms the method with an addition x + h because the multiplication gating breaks the covariance of the output."
},
{
"text": "A transformation Q x ∗ Q h (cid:54) ="
},
{
"text": "Q ( x ∗ h ) , whereas for addition the output remains covariant Q x + Q h"
},
{
"text": "= Q ( x + h ) ."
}
] |
[
{
"text": "• The shape of the loss function after spontaneous symmetry breaking has the same shape observed by Goodfellow et al."
},
{
"text": "(2014) towards the end of training, see Figure 1."
},
{
"text": "• The training error typically drops drastically when learning rate is decreased."
},
{
"text": "This occurs when the learning rate drops below η c , forcing a phase transition so that new minima develop."
},
{
"text": "See Figure 1."
},
{
"text": "• A cyclical learning rate (Smith & Topin, 2017) helps to get to the new minimum faster, see Section 2.5."
},
{
"text": "• Stochasticity in gradient descent juggles the loss function such that the weights are no longer at the local maximum of Figure 1."
},
{
"text": "A gradient descent step is taken to further take the weights towards the local minimum."
},
{
"text": "Stochasticity helps the network to generalize better."
},
{
"text": "• When the learning rate is too small to move away from A in Figure 1."
},
{
"text": ""
},
{
"text": "PReLU’s"
},
{
"text": "(He et al., 2015b) could move the weight away from A through the training of the non-linearity."
},
{
"text": "This corresponds to breaking the symmetry explicitly in Theorem 1."
},
{
"text": "• Results from Shwartz-Ziv & Tishby (2017) are due to spontaneous symmetry breaking, see Section 4."
},
{
"text": "• Identity mapping outperforms other skip connections (He et al., 2016) is a result of the residual unit’s output being small."
},
{
"text": "Then the residual units can be decoupled leading to a small λ and so it is easier for spontaneous symmetry breaking to occur, from m 2 = − µ 2 + 14 λη 2 ."
},
{
"text": "• Skip connection across residual units breaks additional symmetry."
},
{
"text": "Suppose now an identity skip connection connects x 1 and the output of F 2 ."
},
{
"text": "Now perform a symmetry transformation on x 1 and x 2 , Q 1 and Q 2 ∈ G , respectively."
},
{
"text": "Then the output after two residual untis is Q x"
},
{
"text": "="
},
{
"text": "Q 1 x 1 + Q 2 x 2 + Q 2 F 2 ."
},
{
"text": "Neither Q = Q 1 nor Q = Q 2 can satisfies the covariance under G ."
},
{
"text": "This is observed by Orhan & Pitkow (2017)."
},
{
"text": "• The shattered gradient problem (Balduzzi et al., 2017)."
},
{
"text": "It is observed that the gradient in deep (non-residual) networks is very close to white noise."
},
{
"text": "This is reflected in the exponential in Equation (6)."
},
{
"text": "This effect on ResNet is reduced because of the decoupling limit λ → 0 ."
},
{
"text": "This leads to the weight eigenvalues m 2 being larger in non-residual networks owing to m 2 = − µ 2 + 14 λη 2 ."
},
{
"text": "And so a higher oscillation frequency in the correlation function."
},
{
"text": "• In recurrent neural networks, multiplicative gating (Yuhuai et al., 2016) combines the input x and the hidden state h by an element-wise product."
},
{
"text": "Their method outperforms the method with an addition x + h because the multiplication gating breaks the covariance of the output."
},
{
"text": "A transformation Q x ∗ Q h (cid:54) ="
},
{
"text": "Q ( x ∗ h ) , whereas for addition the output remains covariant Q x + Q h"
},
{
"text": "= Q ( x + h ) ."
}
] |
BkxG1CvhWf.wcpE7maMLZ4.03
|
Practically, the existing methods to compute the recurrence diameter have a doubly exponential worst case running time (Kroening and Strichman 2003; Abdulaziz and Berger 2021), and they are only useful when applied to small abstractions in the context of compositionally computing upper bounds on other topological properties. Furthermore, there is not a compositional algorithm that can compute upper bounds on the recurrence diameter using abstractions’ recurrence diameters (Abdulaziz 2017)[Chapter 3, Theorem 2]. Accordingly, due to this absence of a practical way to compute it or tightly bound it, the recurrence diameter cannot be practically used as a completeness threshold.
|
Practically, the existing methods to compute the recurrence diameter have a doubly exponential worst case running time (Kroening and Strichman 2003; Abdulaziz and Berger 2021), and they are only useful when applied to small abstractions in the context of compositionally computing upper bounds on other topological properties. Furthermore, there is not a compositional algorithm that can compute upper bounds on the recurrence diameter using abstractions recurrence diameter. Accordingly, the recurrence diameter cannot be practically used as a completeness threshold due to the absence of a practical way to compute it or tightly bound it.
|
{
"annotation": [
"Concision",
"Rewriting_medium"
],
"instruction": "I do not need references in the in second sentence. Rephrase the last sentence.",
"annotator": "annotator_09"
}
|
{
"annotation": [
"Rewriting_medium"
],
"instruction": "Remove the references in the second half of the paragraph. Reorder the last sentence to improve readability.",
"annotator": "annotator_07"
}
|
BkxG1CvhWf
|
wcpE7maMLZ4
| 3
|
[
{
"text": "Practically, the existing methods to compute the recurrence diameter have a doubly exponential worst case running time (Kroening and Strichman 2003; Abdulaziz and Berger 2021), and they are only useful when applied to small abstractions in the context of compositionally computing upper bounds on other topological properties."
},
{
"text": "Furthermore, there is not a compositional algorithm that can compute upper bounds on the recurrence diameter using abstractions’ recurrence diameters (Abdulaziz 2017)[Chapter 3, Theorem 2]."
},
{
"text": "Accordingly, due to this absence of a practical way to compute it or tightly bound it, the recurrence diameter cannot be practically used as a completeness threshold."
}
] |
[
{
"text": "Practically, the existing methods to compute the recurrence diameter have a doubly exponential worst case running time (Kroening and Strichman 2003; Abdulaziz and Berger 2021), and they are only useful when applied to small abstractions in the context of compositionally computing upper bounds on other topological properties."
},
{
"text": "Furthermore, there is not a compositional algorithm that can compute upper bounds on the recurrence diameter using abstractions recurrence diameter."
},
{
"text": "Accordingly, the recurrence diameter cannot be practically used as a completeness threshold due to the absence of a practical way to compute it or tightly bound it."
}
] |
OzYyHKPyj7.O9Mk1uqXra.00
|
To remedy this, some previous work has investigated the addition of differentiable stack data structures to RNNs (Sun et al., 1995; Grefenstette et al., 2015; Joulin & Mikolov, 2015; DuSell & Chiang, 2020). Just as adding a stack to a finite state machine, which makes it a pushdown automaton (PDA), enables it to recognize context-free languages (CFLs), the hope is that adding stacks to RNNs will increase the range of problems on which they can be used effectively. We also expect stacks to aid training by introducing an inductive bias for learning hierarchical patterns, and to increase generalization power by structuring the model’s memory in a way that better predicts held-out hierarchical data.
|
To remedy this, some previous work has investigated the addition of differentiable stack data structures to RNNs (Sun et al., 1995; Grefenstette et al., 2015; Joulin & Mikolov, 2015; DuSell & Chiang, 2020), which is closely related to work on neural networks that model shift-reduce parsers (Bowman et al., 2016; Dyer et al., 2016; Shen et al., 2019a). Just as adding a stack to a finite state machine, which makes it a pushdown automaton (PDA), enables it to recognize context-free languages (CFLs), the hope is that adding stacks to RNNs will increase the range of problems on which they can be used effectively. We also expect stacks to aid training by introducing an inductive bias for learning hierarchical patterns, and to increase generalization power by structuring the model’s memory in a way that better predicts held-out hierarchical data.
|
{
"annotation": [
"Development"
],
"instruction": "",
"annotator": "annotator_10"
}
|
{
"annotation": [
"Content_addition"
],
"instruction": "",
"annotator": "annotator_03"
}
|
OzYyHKPyj7
|
O9Mk1uqXra
| 0
|
[
{
"text": "To remedy this, some previous work has investigated the addition of differentiable stack data structures to RNNs (Sun et al., 1995; Grefenstette et al., 2015; Joulin & Mikolov, 2015; DuSell & Chiang, 2020)."
},
{
"text": "Just as adding a stack to a finite state machine, which makes it a pushdown automaton (PDA), enables it to recognize context-free languages (CFLs), the hope is that adding stacks to RNNs will increase the range of problems on which they can be used effectively."
},
{
"text": "We also expect stacks to aid training by introducing an inductive bias for learning hierarchical patterns, and to increase generalization power by structuring the model’s memory in a way that better predicts held-out hierarchical data."
}
] |
[
{
"text": "To remedy this, some previous work has investigated the addition of differentiable stack data structures to RNNs (Sun et al., 1995; Grefenstette et al., 2015; Joulin & Mikolov, 2015; DuSell & Chiang, 2020), which is closely related to work on neural networks that model shift-reduce parsers (Bowman et al., 2016; Dyer et al., 2016; Shen et al., 2019a)."
},
{
"text": "Just as adding a stack to a finite state machine, which makes it a pushdown automaton (PDA), enables it to recognize context-free languages (CFLs), the hope is that adding stacks to RNNs will increase the range of problems on which they can be used effectively."
},
{
"text": "We also expect stacks to aid training by introducing an inductive bias for learning hierarchical patterns, and to increase generalization power by structuring the model’s memory in a way that better predicts held-out hierarchical data."
}
] |
Byyb66j52G.hR5KKRfhQm.14
|
Delayed augmentation . We experiment on the generalization when we start to use augmentationlately as 10M, 20M. As shown in Figure 2(d) and Figure 2(e), the generalization rapidly increases after using augmentation at 10M and 20M. Although we use augmentation lately, the augmentation helps the generalization regardless of the usage timing. Golatkar et al . [9] shows that delayed augmentation cannot achieve as much as using augmentation during whole training in supervised-learning. However, (10, 25) improves the generalization comparable with (0, 25), which use augmentation throughouttraining, unlike supervised learning. However, when augmentation noticeably helps the training, suchas Figure 2(e), delayed augmentation struggles to follow earlier one in Figure 2(f), because the RLgradually improves the policy and trajectory by Markov property. Furthermore, RL has a limitednumber of samples unlike supervised learning, so using augmentation from the initial time is morecritical than supervised learning if augmentation helps the training.
|
Delayed augmentation . To determine when we start to use augmentation, we delayed its use until after 10M or 20M steps. The generalization rapidly increases after using augmentation at 10M and 20M (Figre 2(d), 2(e)). Although we impose augmentation late, the augmentation helps the generalization regardless of the start timing. In SL, delayed augmentation cannot achieve as much as using augmentation during whole training [9]. However, (10, 25) improves the generalization to be comparable with that of (0, 25), which use augmentation throughout training; this result differs from the case of supervised learning. However, when augmentation noticeably helps the training, the performance achieved using delayed augmentation may not catch up (Figure 2(e)) to the performance achieved using early augmentation (Figure 2(f)), because the RL gradually improves the policy and trajectory, as a result of its Markov property. Furthermore, the number of samples is limited for RL, but not for supervised learning, so using augmentation from the initial time is more critical than supervised learning if augmentation helps the training.
|
{
"annotation": [
"Rewriting_medium"
],
"instruction": "Use clearer expression, use concise words.",
"annotator": "annotator_08"
}
|
{
"annotation": [
"Rewriting_medium"
],
"instruction": "Revise this paragraph to use clearer and more precise words.",
"annotator": "annotator_07"
}
|
Byyb66j52G
|
hR5KKRfhQm
| 14
|
[
{
"text": "Delayed augmentation ."
},
{
"text": "We experiment on the generalization when we start to use augmentationlately as 10M, 20M. As shown in Figure 2(d) and Figure 2(e), the generalization rapidly increases after using augmentation at 10M and 20M. Although we use augmentation lately, the augmentation helps the generalization regardless of the usage timing."
},
{
"text": "Golatkar et al ."
},
{
"text": "[9] shows that delayed augmentation cannot achieve as much as using augmentation during whole training in supervised-learning."
},
{
"text": "However, (10, 25) improves the generalization comparable with (0, 25), which use augmentation throughouttraining, unlike supervised learning."
},
{
"text": "However, when augmentation noticeably helps the training, suchas Figure 2(e), delayed augmentation struggles to follow earlier one in Figure 2(f), because the RLgradually improves the policy and trajectory by Markov property."
},
{
"text": "Furthermore, RL has a limitednumber of samples unlike supervised learning, so using augmentation from the initial time is morecritical than supervised learning if augmentation helps the training."
}
] |
[
{
"text": "Delayed augmentation ."
},
{
"text": "To determine when we start to use augmentation, we delayed its use until after 10M or 20M steps. The generalization rapidly increases after using augmentation at 10M and 20M (Figre 2(d), 2(e)). Although we impose augmentation late, the augmentation helps the generalization regardless of the start timing."
},
{
"text": ""
},
{
"text": "In SL, delayed augmentation cannot achieve as much as using augmentation during whole training [9]."
},
{
"text": "However, (10, 25) improves the generalization to be comparable with that of (0, 25), which use augmentation throughout training; this result differs from the case of supervised learning."
},
{
"text": "However, when augmentation noticeably helps the training, the performance achieved using delayed augmentation may not catch up (Figure 2(e)) to the performance achieved using early augmentation (Figure 2(f)), because the RL gradually improves the policy and trajectory, as a result of its Markov property."
},
{
"text": "Furthermore, the number of samples is limited for RL, but not for supervised learning, so using augmentation from the initial time is more critical than supervised learning if augmentation helps the training."
}
] |