SOME PROPERTIES OF SMOOTHING AND ROUGHENING THE VALUES OF THE INDEX ATTRIBUTES ON THE DECISION BLOCK

Abstract: This report proposes and proves some properties of smoothing and roughening the values of the index attributes on the decision block and on the slices of the decision block. Whenever a condition equivalence class or a decision equivalence class on the decision block is smoothed or roughened, the corresponding condition or decision equivalence classes on the slice are smoothed or roughened sympathetically, in whole or in part. From these results, when smoothing or roughening a condition equivalence class or a decision equivalence class, the incremental calculation of the support matrices on the slice is simpler, and therefore faster, than recomputing these matrices from scratch.


I. INTRODUCTION
The search for decision laws on the decision table, both by assessing the measures of decision laws and by incremental approaches to determining decision laws, has been studied by many groups of authors, such as in [7], [8]. On the other hand, when the decision table is extended to a decision block, models and algorithms to detect decision laws on the decision block have been studied by the authors in [4], [5], [6]. However, models and algorithms for smoothing and roughening the values of the index attributes on the decision block have not been studied until now. The purpose of this paper is to study some properties of smoothing and roughening the values of the condition index attributes or decision index attributes on the decision block and on the slices of the decision block. From these results, when a condition equivalence class or decision equivalence class is smoothed or roughened and the corresponding classes on a slice split or merge sympathetically, the incremental calculation of the support matrices on the slice is simpler, and therefore faster, than recomputing these matrices after smoothing or roughening the values of the condition index attributes or decision index attributes.

II. THE BASIC CONCEPTS
II.1 The block and the slice of a block

Definition II.1 [1]: Let R = (id; A1, A2, ..., An), where id is a non-empty finite index set and Ai (i = 1..n) are the attributes. Each attribute Ai (i = 1..n) has a corresponding value domain dom(Ai). A block r on R, denoted r(R), consists of a finite number of elements, where each element is a family of mappings from the index set id to the value domains of the attributes Ai (i = 1..n):

t ∈ r(R) ⇔ t = {t^i : id → dom(Ai)}, i = 1..n.

The block is denoted by r(R) or r(id; A1, A2, ..., An); sometimes, without fear of confusion, we simply denote it r.

Definition II.2 [2], [3]:

Let R = (id; A1, A2, ..., An) and let r(R) be a block over R. For each x ∈ id, we denote by r(Rx) the block with scheme Rx = ({x}; A1, A2, ..., An) whose elements are the restrictions of the elements of r(R) to the single index point x; r(Rx) is called a slice of the block r(R) at the point x, and sometimes we denote it r_x.
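As an informal illustration (not part of the original formalism), a block and its slice can be sketched in Python, with each element stored as a family of per-attribute mappings from the index set to attribute values; all names and data here are hypothetical:

```python
# Illustrative-only representation: an element t of a block r(R) is a family
# of mappings, one per attribute Ai, from the index set id to dom(Ai).
ID = {"x", "y"}                     # the index set id
ATTRS = ["A1", "A2"]                # attributes A1, A2 of the scheme R

r = [  # two elements of the block r(R)
    {"A1": {"x": 1, "y": 2}, "A2": {"x": "a", "y": "b"}},
    {"A1": {"x": 1, "y": 3}, "A2": {"x": "a", "y": "c"}},
]

def slice_at(block, x):
    """The slice r_x: restrict every mapping of every element to the
    single index point x."""
    return [{a: {x: t[a][x]} for a in t} for t in block]

r_x = slice_at(r, "x")
```

Restricting every mapping to one index point is exactly what Definition II.2 describes, which is why a slice behaves like an ordinary decision table at that point.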
Here, for simplicity, we use the symbols x^(i) (x ∈ id, i = 1..n), which we call the index attributes of the block scheme R = (id; A1, A2, ..., An).

II.2 The information block

Definition II.3 [4]: Let R = (id; A1, A2, ..., An) be a block scheme and r a block over R. Then the information block is a tuple of four elements IB = (U, A, V, f), where U is the set of objects of r, called the object space; A is the set of index attributes x^(i) (x ∈ id, i = 1..n); V is the set of values of the index attributes; and f : U × A → V is an information function. We call f(u, x^(i)) the value of the object u at the index attribute x^(i).
If V contains missing values on at least one index attribute x^(i) ∈ A, then we call IB an inadequate information block; in contrast, IB is a complete information block, or simply IB is an information block.

Definition II.4 [4]: Let R = (id; A1, A2, ..., An) be a block scheme, r a block over R, and r_x the slice of the block r at the point x ∈ id. Then the slice of the information block at x is a tuple of four elements IB_x = (U, A_x, V_x, f_x), where U is the set of objects of r, called the object space; A_x is the set of index attributes x^(i) (i = 1..n) of the slice; V_x is the set of values; and f_x : U × A_x → V_x is an information function satisfying, for all u ∈ U and x^(i) ∈ A_x: f_x(u, x^(i)) = f(u, x^(i)).

II.3 The indiscernibility relation

Definition II.5 [5]: Let IB = (U, A, V, f) be an information block. Then for each index attribute set P ⊆ A we define an equivalence relation, denoted IND(P) and called the indiscernibility relation, as follows:

IND(P) = {(u, v) ∈ U × U | ∀ x^(i) ∈ P : f(u, x^(i)) = f(v, x^(i))}.

The relation IND(P) divides U into equivalence classes, which constitute a partition of U, denoted U/IND(P) or simply U/P. For each u ∈ U, the equivalence class containing u under the relation IND(P), denoted [u]_P, is defined as: [u]_P = {v ∈ U | (u, v) ∈ IND(P)}. By this definition we see that two elements u, v ∈ U belong to the same equivalence class if and only if they have the same value on every index attribute in P.

Definition II.6 [5]: Let IB = (U, A, V, f) be an information block, let P, Q ⊆ A be sets of index attributes, and let U/P = {P1, P2, ..., Pm}, U/Q = {Q1, Q2, ..., Qn} be the partitions generated by P and Q, respectively. Then we say the partition by Q is coarser than the partition by P, or the partition by P is smoother than the partition by Q, if and only if every class of U/P is contained in some class of U/Q.
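The grouping in Definition II.5 can be illustrated with a short Python sketch that collects objects by their value profile on P; the data and helper names below are hypothetical:

```python
from collections import defaultdict

# Illustration of the indiscernibility relation IND(P): objects are grouped
# by their value profile on the index-attribute set P; the groups are the
# equivalence classes of IND(P) and partition U.
def partition(U, P, f):
    """Return U/IND(P) as a list of equivalence classes."""
    classes = defaultdict(list)
    for u in U:
        key = tuple(f(u, a) for a in P)   # value profile of u on P
        classes[key].append(u)
    return list(classes.values())

# toy information function f(u, a), stored as a value table
table = {1: {"c1": 0, "c2": 1}, 2: {"c1": 0, "c2": 1}, 3: {"c1": 1, "c2": 1}}
f = lambda u, a: table[u][a]

U_over_P = partition([1, 2, 3], ["c1"], f)
# objects 1 and 2 agree on every attribute of P = {c1}, so they share a class
```

Two objects land in the same class exactly when they have the same value on every attribute of P, matching the definition above.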

II.4 The decision block

Comment:
Let DB = (U, C∪D, V, f) be a decision block, where C is the set of condition index attributes and D is the set of decision index attributes. Then, if id = {x}, the decision block DB degenerates into the decision table as known.
When studying the decision block, one wants to find the decision laws in it. In these decision laws, the condition part corresponds to the condition index attributes, and the conclusion part corresponds to the decision index attributes. The decision laws found in the decision block are divided into two categories:
i) The laws that are correct on the whole block.
ii) The laws that are correct on each particular slice of the block.

II.5 The decision laws

Definition II.9 [5]: Let DB = (U, C∪D) be a decision block with object space U, and let U/C = {C1, C2, ..., Cm}, U/D = {D1, D2, ..., Dk} and, for each x ∈ id, U/C_x = {C_x1, ..., C_xt_x}, U/D_x = {D_x1, ..., D_xh_x} be the partitions generated by C, D, C_x, D_x, correspondingly. A decision law on the block is denoted by Ci → Dj, and on the slice at point x it is denoted by C_xp → D_xq.

Proposition II.1 [5]: Let DB = (U, C∪D) be a decision block with object space U. Then, for every Ci ∈ U/C, Dj ∈ U/D and every x ∈ id, there exist C_xp ∈ U/C_x and D_xq ∈ U/D_x such that Ci ⊆ C_xp and Dj ⊆ D_xq.

Definition II.10 [5]:
Let DB = (U, C∪D) be a decision block, i = 1..m, j = 1..k, p ∈ {1, 2, ..., t_x}, q ∈ {1, 2, ..., h_x}, x ∈ id. Then the support, accuracy and coverage of the decision law Ci → Dj on the block are:

Sup(Ci, Dj) = |Ci ∩ Dj|, Acc(Ci, Dj) = |Ci ∩ Dj| / |Ci|, Cov(Ci, Dj) = |Ci ∩ Dj| / |Dj|,

and for the decision law C_xp → D_xq on the slice of the block at point x:

Sup(C_xp, D_xq) = |C_xp ∩ D_xq|, Acc(C_xp, D_xq) = |C_xp ∩ D_xq| / |C_xp|, Cov(C_xp, D_xq) = |C_xp ∩ D_xq| / |D_xq|.

From this definition, we can represent the measures of the decision laws on the block in the form of measurement matrices: the support matrix, the accuracy matrix and the coverage matrix, whose element at row i and column j is Sup(Ci, Dj), Acc(Ci, Dj) and Cov(Ci, Dj), respectively. For the decision laws on the slices of the block, we have the same support, accuracy and coverage matrices.

Definition II.11 [5]: Let DB = (U, C∪D) be a decision block, and let Ci ∈ U/C, Dj ∈ U/D be the condition equivalence class and decision equivalence class generated by C and D, respectively; Ci → Dj is a decision law on the block DB, i = 1..m, j = 1..k.

Definition II.12 [5]: Let DB = (U, C∪D) be a decision block, with Ci ∈ U/C, Dj ∈ U/D, i = 1..m, j = 1..k, the condition and decision equivalence classes generated by C and D, respectively, and let α, β be two given thresholds (α, β ∈ (0, 1)). If Acc(Ci, Dj) ≥ α and Cov(Ci, Dj) ≥ β, then we call Ci → Dj a meaningful decision law.
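The measures above are the standard rough-set support, accuracy and coverage; a minimal Python sketch, assuming each equivalence class is given as a list of objects (all names illustrative), is:

```python
# Minimal sketch of Definition II.10 over toy partitions of U = {1, 2, 3, 4}.
def sup(Ci, Dj):
    # Sup(Ci, Dj) = |Ci ∩ Dj|
    return len(set(Ci) & set(Dj))

def acc(Ci, Dj):
    # Acc(Ci, Dj) = |Ci ∩ Dj| / |Ci|
    return sup(Ci, Dj) / len(Ci)

def cov(Ci, Dj):
    # Cov(Ci, Dj) = |Ci ∩ Dj| / |Dj|
    return sup(Ci, Dj) / len(Dj)

def support_matrix(U_over_C, U_over_D):
    """Rows follow the condition classes, columns the decision classes."""
    return [[sup(Ci, Dj) for Dj in U_over_D] for Ci in U_over_C]

U_over_C = [[1, 2], [3, 4]]   # toy partition U/C
U_over_D = [[1, 3], [2, 4]]   # toy partition U/D
M = support_matrix(U_over_C, U_over_D)   # [[1, 1], [1, 1]]
```

The accuracy and coverage matrices are obtained the same way by substituting acc or cov for sup.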
Definition II.13 [5]: Let DB = (U, C∪D, V, f) be a decision block with object space U, a ∈ C∪D, and let V_a be the set of existing values of the index attribute a. Suppose Z = {x_s ∈ U | f(x_s, a) = z} is the set of objects whose value on the index attribute a is z. If Z is partitioned into two sets W and Y such that Z = W ∪ Y, and the objects of W receive a new value w while the objects of Y receive a new value y on a, then we say the value z is smoothed into the two values w and y; conversely, the two values w and y are roughened into z.

III.1 Smoothing and roughening the condition equivalence classes on the decision block and on the slice

Suppose the condition equivalence class C_s ∈ U/C satisfies f(C_s, a) = z. By Proposition I.1, on the slice r_x there exists C_xi ∈ U/C_x satisfying C_s ⊆ C_xi. On the other hand, when C_s is smoothed into the two condition equivalence classes C_p and C_q, then according to Theorem I.2 we have C_s = C_p ∪ C_q, C_p, C_q ⊆ C_xi, with f(C_p, a) = w and f(C_q, a) = y. Finally, if we assign each element u ∈ C_xi \ C_s either w or y at the index attribute a, then we obtain a subdivision of C_xi into two new condition equivalence classes C_xi' and C_xi'' satisfying f(C_xi', a) = w, f(C_xi'', a) = y and C_xi = C_xi' ∪ C_xi''. As a result, on the slice r_x the condition equivalence class C_xi satisfying C_s ⊆ C_xi is also smoothed into two condition equivalence classes C_xi' and C_xi'' satisfying C_p ⊆ C_xi', C_q ⊆ C_xi'' (f(C_xi', a) = w, f(C_xi'', a) = y) and C_xi = C_xi' ∪ C_xi''.

From this result we see that the row corresponding to the condition equivalence class C_xi in the support matrix of the slice r_x is split into two new rows corresponding to the two new condition equivalence classes C_xi' and C_xi''. Therefore, to calculate the elements of these two new rows in the support matrix of the slice r_x, we first calculate the values Sup(C_xi', D_xj) with j = 1, 2, ..., h_x; from there, we infer each value Sup(C_xi'', D_xj) as the difference between Sup(C_xi, D_xj) and Sup(C_xi', D_xj), j = 1, 2, ..., h_x.

Conversely, we say that on the slice r_x the two condition equivalence classes C_xi, C_xj are roughened sympathetically into C_xk by the roughening of the two condition equivalence classes C_p, C_q into C_s.
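The incremental row update just described (count one new row directly, obtain the other by subtraction) can be sketched as follows; all row values are illustrative:

```python
# Sketch of the incremental update when C_xi is smoothed into C_xi' and
# C_xi'': only the row for C_xi' is counted directly; the row for C_xi''
# follows by subtraction from the old row.
def split_row(row_xi, row_xi_prime):
    """Sup(C_xi'', D_xj) = Sup(C_xi, D_xj) - Sup(C_xi', D_xj), j = 1..h_x."""
    return [old - new for old, new in zip(row_xi, row_xi_prime)]

row_xi = [5, 3, 2]        # old row: Sup(C_xi, D_xj)
row_xi_prime = [2, 1, 2]  # counted directly: Sup(C_xi', D_xj)
row_xi_dprime = split_row(row_xi, row_xi_prime)   # [3, 2, 0]
```

This halves the counting work: only one of the two new classes is intersected with the decision classes.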

Proof
Assume we have C_p, C_q ∈ U/C with f(C_p, a) = w and f(C_q, a) = y. Applying the results of Proposition I.1, we infer that on the slice r_x there exist two condition equivalence classes C_xi, C_xj satisfying C_p ⊆ C_xi and C_q ⊆ C_xj. From there we have f(u, a) = w for u ∈ C_p ⊆ C_xi, so f(C_xi, a) = w; in the same way, f(u, a) = y for u ∈ C_q ⊆ C_xj, so f(C_xj, a) = y.
On the other hand, assume the two condition equivalence classes C_p, C_q ∈ U/C are roughened into the new condition equivalence class C_s ∈ U/C. According to the results of Theorem I.1 we then have:

∀ a_j ≠ a, a_j ∈ C: f(C_p, a_j) = f(C_q, a_j) ⇒ ∀ a_j ≠ a, a_j ∈ C_x: f(C_p, a_j) = f(C_q, a_j). (1)

On the slice r_x, since C_p ⊆ C_xi, we have: ∀ a_j ∈ C_x: f(C_xi, a_j) = f(C_p, a_j). (2)

Similarly, since C_q ⊆ C_xj: ∀ a_j ∈ C_x: f(C_xj, a_j) = f(C_q, a_j). (3)

From (1), (2) and (3) we infer: ∀ a_j ≠ a, a_j ∈ C_x: f(C_xi, a_j) = f(C_xj, a_j).
Therefore, applying the necessary and sufficient condition in the statement of Theorem I.1, the two condition equivalence classes C_xi, C_xj are roughened sympathetically into C_xk by the roughening of the two condition equivalence classes C_p, C_q into C_s. From the nature of this roughening of the two condition equivalence classes C_xi, C_xj into C_xk we have the following proposition.

Proposition: Suppose C_p, C_q are roughened into the new condition equivalence class C_s (f(C_s, a) = z), and on the slice r_x the two condition equivalence classes C_xi, C_xj (C_p ⊆ C_xi, C_q ⊆ C_xj) are roughened sympathetically into C_xk. Then:
i) C_xi ∪ C_xj = C_xk;
ii) ∀ D_xh ∈ U/D_x: Sup(C_xi, D_xh) + Sup(C_xj, D_xh) = Sup(C_xk, D_xh), with h = 1, 2, ..., h_x.

Proof
i) Suppose x ∈ C_xi ∪ C_xj, so x ∈ C_xi or x ∈ C_xj. If x ∈ C_xi, then since the two condition equivalence classes C_xi, C_xj are roughened into the condition equivalence class C_xk, we have f(x, a) = f(C_xi, a) = f(C_xk, a) = z. On the other hand, applying the results of Theorem 2.1 we have ∀ a_j ≠ a: f(C_xi, a_j) = f(C_xj, a_j) = f(C_xk, a_j), hence f(x, a_j) = f(C_xi, a_j) = f(C_xj, a_j) = f(C_xk, a_j), and so x ∈ C_xk. Completely similarly, when x ∈ C_xj we also prove that x ∈ C_xk.
So we infer: (C_xi ∪ C_xj) ⊆ C_xk. (5)
Conversely, suppose x ∈ C_xk. Because C_xi and C_xj are roughened into C_xk, applying the results of Theorem 2.1 we have: ∀ a_j ≠ a: f(C_xi, a_j) = f(C_xj, a_j) = f(C_xk, a_j), hence f(x, a_j) = f(C_xi, a_j) = f(C_xj, a_j). On the other hand, because x ∈ C_xk, f(x, a) = z; but z is roughened from w and y, so f(x, a) = w or f(x, a) = y.
- If f(x, a) = w, then f(x, a) = f(C_xi, a) = w, so x ∈ C_xi.
- If f(x, a) = y, then f(x, a) = f(C_xj, a) = y, so x ∈ C_xj.
So x ∈ C_xi or x ∈ C_xj, that is, x ∈ C_xi ∪ C_xj. Therefore, from x ∈ C_xk it follows that x ∈ C_xi ∪ C_xj. So: C_xk ⊆ (C_xi ∪ C_xj). (6)
Combining (5) and (6) we have: C_xi ∪ C_xj = C_xk.
On the other hand: D xh U/D x : Sup(C xk ,D xh )=|C xk D xh | = |(C xi C xj ) D xh | = |(C xi D xh ) (C xj D xh )|. We have: C xi C xj = (C xi D xh ) (C xj D xh ) = .
So we infer: ∀ D_xh ∈ U/D_x: Sup(C_xk, D_xh) = Sup(C_xi, D_xh) + Sup(C_xj, D_xh), with h = 1, 2, ..., h_x. Thus, the two rows of the support matrix on the slice r_x corresponding to the two condition equivalence classes C_xi, C_xj are combined into one new row corresponding to the condition equivalence class C_xk. The value of each element of the new row corresponding to C_xk is the sum of the two corresponding elements of the rows for C_xi and C_xj.
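The row combination in this proof is a plain element-wise sum, so the support matrix can be updated without recounting any intersections; a minimal sketch with illustrative values:

```python
# Sketch of part ii): after roughening C_xi, C_xj into C_xk, the row of
# C_xk is the element-wise sum of the rows of C_xi and C_xj.
def merge_rows(row_xi, row_xj):
    """Sup(C_xk, D_xh) = Sup(C_xi, D_xh) + Sup(C_xj, D_xh), h = 1..h_x."""
    return [a + b for a, b in zip(row_xi, row_xj)]

row_xi = [2, 0, 1]   # Sup(C_xi, D_xh), illustrative
row_xj = [1, 3, 0]   # Sup(C_xj, D_xh), illustrative
row_xk = merge_rows(row_xi, row_xj)   # [3, 3, 1]
```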

III.2 Smoothing and roughening the decision equivalence classes on the decision block and on the slice

Proposition III: We say that on the slice r_x the decision equivalence class D_xi is smoothed sympathetically, partially, into the two new decision equivalence classes D_xi' and D_xi'' by the smoothing of D_s into the two new decision equivalence classes D_p, D_q. The proof of this proposition is similar to the proof of Proposition II.1.

Proposition III.6
Let DB = (U, C∪D) be a decision block, a = x^(i) ∈ D, and let V_a be the set of existing values of the decision index attribute a; suppose the value z of a is smoothed into two new values w and y. From this result we see that the column corresponding to the decision equivalence class D_xi in the support matrix of the slice r_x is split into two new columns corresponding to the two new decision equivalence classes D_xi' and D_xi''. Therefore, to calculate the elements of these two new columns in the support matrix of the slice r_x, we first calculate the values Sup(C_xj, D_xi') with j = 1, 2, ..., t_x; from there, we infer each value Sup(C_xj, D_xi'') as the difference between Sup(C_xj, D_xi) and Sup(C_xj, D_xi'), j = 1, 2, ..., t_x. We say that on the slice r_x the two decision equivalence classes D_xi, D_xj are roughened sympathetically into D_xk by the roughening of the two decision equivalence classes D_p, D_q into the decision equivalence class D_s. The proof of this statement is similar to the proof of Proposition II.3.

Proof
i) Suppose u ∈ D_xi ∪ D_xj, so u ∈ D_xi or u ∈ D_xj. If u ∈ D_xi, then since the two decision equivalence classes D_xi, D_xj are roughened into the decision equivalence class D_xk, we have f(u, a) = f(D_xi, a) = f(D_xk, a) = z. On the other hand, applying the results of Theorem 2.1 we have ∀ a_r ≠ a: f(D_xi, a_r) = f(D_xj, a_r) = f(D_xk, a_r), hence f(u, a_r) = f(D_xi, a_r) = f(D_xj, a_r) = f(D_xk, a_r), and so u ∈ D_xk. Completely similarly, if u ∈ D_xj then we also prove that u ∈ D_xk. So we infer: (D_xi ∪ D_xj) ⊆ D_xk. (7)
Conversely, suppose u ∈ D_xk. Because D_xi and D_xj are roughened into D_xk, applying the results of Theorem 2.1 we have: ∀ a_r ≠ a: f(D_xi, a_r) = f(D_xj, a_r) = f(D_xk, a_r), hence f(u, a_r) = f(D_xi, a_r) = f(D_xj, a_r). On the other hand, because u ∈ D_xk, f(u, a) = z; but z is roughened from w and y, so f(u, a) = w or f(u, a) = y.
- If f(u, a) = w, then f(u, a) = f(D_xi, a) = w, so u ∈ D_xi.
- If f(u, a) = y, then f(u, a) = f(D_xj, a) = y, so u ∈ D_xj.
So u ∈ D_xi or u ∈ D_xj, that is, u ∈ D_xi ∪ D_xj. Therefore, from u ∈ D_xk it follows that u ∈ D_xi ∪ D_xj. So: D_xk ⊆ (D_xi ∪ D_xj). (8)
Combining (7) and (8) we have: D_xi ∪ D_xj = D_xk.
ii) Because D_xi, D_xj are decision equivalence classes, we have D_xi ∩ D_xj = ∅.
On the other hand, ∀ C_xh ∈ U/C_x: Sup(C_xh, D_xk) = |C_xh ∩ D_xk| = |(D_xi ∪ D_xj) ∩ C_xh| = |(D_xi ∩ C_xh) ∪ (D_xj ∩ C_xh)|. Since D_xi ∩ D_xj = ∅, we have (D_xi ∩ C_xh) ∩ (D_xj ∩ C_xh) = ∅, so Sup(C_xh, D_xk) = Sup(C_xh, D_xi) + Sup(C_xh, D_xj).
Thus, the two columns of the support matrix on the slice r_x corresponding to the two decision equivalence classes D_xi, D_xj are roughened sympathetically into one new column corresponding to the decision equivalence class D_xk. The value of each element of the new column corresponding to D_xk is the sum of the two corresponding elements of the columns for D_xi and D_xj.
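The analogous column update can be sketched the same way: roughening D_xi, D_xj into D_xk replaces two columns of the support matrix by their element-wise sum (matrix values and indices are illustrative):

```python
# Sketch of the column update on the decision side: columns i and j of the
# support matrix are replaced by their sum; smoothing a column would use
# the subtraction rule instead.
def merge_columns(M, i, j):
    """Return a new matrix with columns i and j replaced by their sum."""
    merged = []
    for row in M:
        new_row = [v for k, v in enumerate(row) if k not in (i, j)]
        new_row.append(row[i] + row[j])   # summed column appended last
        merged.append(new_row)
    return merged

M = [[2, 1, 0],
     [0, 3, 1]]
M2 = merge_columns(M, 0, 1)   # [[0, 3], [1, 3]]
```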

IV. CONCLUSIONS
Starting from the initial results on the decision block, the paper proposes and proves some results on the relationship between roughening and smoothing the values of condition or decision index attributes, for condition equivalence classes or decision equivalence classes, on the decision block and on its slices. The smoothing of condition equivalence classes or decision equivalence classes on the decision block entails a sympathetic, partial smoothing of the corresponding condition or decision equivalence classes on the slice. The roughening of condition equivalence classes or decision equivalence classes on the decision block entails a sympathetic roughening of the corresponding classes on the slice. From these results, the calculation of the support matrix on a slice is defined in the same way as the calculation of the support matrix on the block when smoothing or roughening condition or decision equivalence classes. In the special case where the index set id = {x}, the information block degenerates into an information system, and these results coincide with the results reported by many authors for information systems. On the basis of these results, we can study the reverse relationship between the slices of an information block and the block itself in case the objects of the information block are changed; some other results may be considered in particular cases of information blocks; this adds to the theoretical results on the exploitation of decision rules on information blocks.

V. ACKNOWLEDGEMENTS
Finally, the authors thank the teachers and the leaders of the Faculty of Information Technology and the Management Board of Hanoi Pedagogical University 2 for creating favorable conditions for us to work and study. This research is funded by Hanoi Pedagogical University 2 (HPU2).