7577: Auto-correction

Memory limit: 128 MB    Time limit: 10 s    Standard input/output
Problem type: traditional    Judging: text comparison

Problem Description

It is preferable to read the PDF statement.

Cuber QQ is poor in English writing, and in the process of preparing this contest, he realized that he was making so many grammar mistakes that an auto-correction engine was needed. Instead of using online tools like ''Microsoft Aim Writing'' or ''Grammarly'', he was interested in building a new engine on his own.

In particular, he adopted a naive sequence-to-sequence model that takes a sequence, which is usually a sentence, and predicts for each token, which is usually a word or character, whether there is something wrong with it, and if so, what it should be replaced with. Here are several examples:


  • In ''Cuber QQ was one of the admirers Quber CC.'', ''admirers'' should be replaced with ''admirers of''.

  • In ''Cuber QQ confess his love to Cuber QQ just now.'', ''confess'' should be replaced with ''confessed''.

  • In ''Quber CC said that they are being and always will be good friends.'', ''are being'' should be replaced with ''are''.



You might notice that, in this sequence-to-sequence model, the phrase to replace should be at least one token, and the target should be at least one token too. This is related to the architecture and training approach of his model. We will not go into too many machine learning details here, as that would make the statement tedious. The problem, however, is that the training data does not conform to this format. In the training data, a sequence with flaws can be annotated with three types of annotations: add, delete and replace. Concretely,


  • A l s1 s2 ⋯ sv: to add sequence s before position l.

  • D l r: to delete from the l-th token to the r-th token, inclusive.

  • R l r s1 s2 ⋯ sv: to replace the sub-sequence from the l-th token to the r-th token, inclusive, with sequence s.



All the annotations are applied directly to the original sequence, i.e., indices like l and r refer to the original indices, not the indices after modification.
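
As an illustration of these semantics, here is a minimal C++ sketch (the names Annotation and applyAnnotations are illustrative, not part of the statement) that applies a list of annotations, given in left-to-right order, to the original sequence and produces the corrected sequence:

    #include <bits/stdc++.h>
    using namespace std;

    // One annotation: type 'A' (add before position l), 'D' (delete [l, r])
    // or 'R' (replace [l, r] with s); r is unused for 'A', s is empty for 'D'.
    struct Annotation {
        char type;
        int l, r;
        vector<int> s;
    };

    // Applies the annotations to the original sequence a. All positions are
    // 1-indexed and refer to the ORIGINAL sequence, as described above.
    vector<int> applyAnnotations(const vector<int>& a, const vector<Annotation>& anns) {
        vector<int> out;
        size_t k = 0;                                 // next annotation to use
        for (int pos = 1; pos <= (int)a.size() + 1; ++pos) {
            // An 'A' annotation inserts its tokens before position pos.
            if (k < anns.size() && anns[k].type == 'A' && anns[k].l == pos) {
                out.insert(out.end(), anns[k].s.begin(), anns[k].s.end());
                ++k;
            }
            if (pos > (int)a.size()) break;           // position n+1 exists only for 'A'
            // A 'D' or 'R' annotation consumes the whole block [l, r].
            if (k < anns.size() && anns[k].type != 'A' && anns[k].l == pos) {
                if (anns[k].type == 'R')
                    out.insert(out.end(), anns[k].s.begin(), anns[k].s.end());
                pos = anns[k].r;                      // skip the deleted/replaced tokens
                ++k;
            } else {
                out.push_back(a[pos - 1]);            // token kept unchanged
            }
        }
        return out;
    }

For instance, applying R 3 4 4, D 5 5 and A 7 5 (the second sample) to 1 2 2 3 4 6 in this way yields 1 2 4 6 5, matching the note below.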

As ''add'' and ''delete'' will not be supported in the model, the preprocessing step needs to rewrite all ''add'' and ''delete'' annotations as ''replace''. Furthermore, as there are many ways to achieve this goal, Cuber QQ wants to find the cheapest way, i.e., after the annotation rewriting, the total number of replaced tokens should be as small as possible. If there is a tie, the number of annotation records should be as small as possible. In case there is still a tie, any one of them is acceptable.
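
Equivalently, if the converted annotation set consists only of replace records (l, r, s), the primary objective x is the total number of original tokens the records cover and the secondary objective y is the number of records. A tiny sketch of this accounting, reusing the hypothetical Annotation struct from the sketch above:

    // Cost of a converted annotation set consisting only of 'R' records:
    // x (minimized first) = total number of original tokens replaced,
    // y (tie-breaker)     = number of annotation records.
    pair<long long, size_t> conversionCost(const vector<Annotation>& replaces) {
        long long x = 0;
        for (const Annotation& ann : replaces)
            x += ann.r - ann.l + 1;                   // tokens covered by this record
        return {x, replaces.size()};
    }

For the first sample below, the single record ''2 6 1 2 3'' covers tokens 2 through 6, so x=5 and y=1, matching the sample output.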

Input Format

The input starts with an integer T (1≤T≤50 000), denoting the number of test cases.

For each test case, the first line contains two space-separated integers n and q (1≤n,q≤2 000), where n is the number of tokens in the original sequence, and q is the number of original annotations.

In the next line, n integers a1,a2,…,an (1≤ai≤n) are presented, denoting the sequence.

The i-th of the following q lines is in one of the 3 formats:


  • A li si,1 si,2 ⋯ si,vi (1≤li≤n+1, 1≤si,k≤n). Notably, when li=n+1, the tokens are added at the end of the sequence.

  • D li ri (1≤li≤ri≤n).

  • R li ri si,1 si,2 ⋯ si,vi (1≤li≤ri≤n, 1≤si,k≤n).



It is guaranteed that the annotations are given in order of position, i.e., li≤li+1 for all 1≤i<q, and li=li+1 happens only when the i-th annotation is an A and the (i+1)-th is not; in other words, there is at most one ''add'' at the same position. The annotations are non-overlapping, i.e., ri≤li+1 for all 1≤i<q whenever ri is defined for the i-th annotation. Furthermore, the corrected sequence after applying all the annotations is not empty.

It is guaranteed that for each test case, the corrected sequence is neither empty nor longer than 4 000. The sum of n over all test cases and the total length of the corrected sequences both do not exceed 50 000.
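
Because the A and R lines contain a variable number of tokens, one convenient way to read a test case (a hedged sketch, assuming line-based parsing is acceptable; the solution logic itself is omitted) is to read each annotation as a whole line:

    #include <bits/stdc++.h>
    using namespace std;

    int main() {
        ios::sync_with_stdio(false);
        cin.tie(nullptr);
        int T;
        cin >> T;
        while (T--) {
            int n, q;
            cin >> n >> q;
            vector<int> a(n);
            for (int& x : a) cin >> x;
            cin.ignore(numeric_limits<streamsize>::max(), '\n'); // finish the sequence line
            for (int i = 0; i < q; ++i) {
                string line;
                getline(cin, line);                   // one annotation per line
                istringstream in(line);
                char type;
                int l, r = -1;
                in >> type >> l;                      // 'A l s...', 'D l r' or 'R l r s...'
                if (type != 'A') in >> r;
                vector<int> s;                        // added/replacement tokens (empty for 'D')
                for (int tok; in >> tok; ) s.push_back(tok);
                // ... convert the annotation (type, l, r, s) here ...
            }
            // ... output x, y and the converted annotations ...
        }
        return 0;
    }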

Output Format

For each test case, output in the first line two space-separated integers: x, the minimum number of tokens that will be replaced, and y, the minimum number of converted annotations.
In the following y lines, output the annotations in any order. The type letter R should be omitted, as replace is the only allowed type. The annotations must be non-overlapping and non-empty, and otherwise follow exactly the same format as the input.

Note


For the first test case, [2,3] is replaced with 1, [4,6] is replaced with 2,3, and the corrected sequence is 1,1,2,3. The optimal correction with only R is to replace [2,6] with 1,2,3.
For the second test case, the corrected sequence is 1,2,4,6,5. Although A and D cannot be used, by merging the consecutive annotations only 4 tokens need to be replaced.
The third test case shows that the corrected sequence, 2,1,2,3,4,5,5,6, can be longer than the original sequence.

Sample Input

3
6 2
1 2 5 3 4 6
R 2 3 1
R 4 6 2 3
6 3
1 2 2 3 4 6
R 3 4 4
D 5 5
A 7 5
6 2
1 2 3 4 5 6
A 1 2
A 6 5

Sample Output

5 1
2 6 1 2 3
4 1
3 6 4 6 5
2 2
6 6 5 6
1 1 2 1
