A Tektronix terminal emulator program, AKI, has been developed so that a micro-computer (NEC PC-9801 series) can serve as a graphic terminal for mainframes and other large-scale computers. AKI allows the micro-computer to be used both as a graphic terminal and as a character terminal. It provides graphic editing facilities such as zooming, panning and other fundamental graphic editing, and Chinese characters (Kanji) can be used as graphic information. Thus, AKI may be an effective emulator for package programs, including foreign products such as SAS. AKI was written in Lattice C and MACRO Assembler.
Key words: AKI, EKAKU, Graphic editor, Panning, Tektronix emulator, Zooming

A Kanji graph is proposed as a graphical representation for multivariate data. The method is compared, through several illustrations, with the dendrogram of cluster analysis, the Face graph and plots of principal component scores. Advantages of the method are discussed with respect to recovering the original data values and its visual suitability for clustering.
Key words: Graphical representation, Cluster analysis, Multivariate data, Kanji graph, Face graph
In order to test the hypothesis that the row and column effects are independent in a two-way contingency table, chi-square tests are usually used. However, it is well known that the chi-square approximation deteriorates for small samples or sparse contingency tables. It is therefore desirable to apply an exact test in such cases.
In this paper, we introduce two algorithms for the exact test. One is a network algorithm that efficiently enumerates all contingency tables with the given fixed marginal frequencies; the other is a method that counts all tables with the fixed margins one by one. The performance of the network algorithm is compared with that of the other method on the basis of the CPU time used to calculate the exact probability. As a result, the network algorithm was found to be superior in computational efficiency. Further, we confirmed that the number of zero cells affects the difference in performance between the exact and chi-square tests. Consequently, we recommend using the exact test for large or moderate contingency tables with many zero cells.
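The network algorithm itself is not reproduced here, but the idea of the exact test — summing the probabilities of every table with the observed margins — can be sketched for the simplest 2x2 case, where the "one by one" enumeration is trivial. The function name below is hypothetical; this is a minimal illustration, not the paper's implementation.

```python
from math import comb

def fisher_exact_2x2(a, b, c, d):
    """Two-sided exact test of independence for the 2x2 table
    [[a, b], [c, d]], enumerating every table with the same margins."""
    row1, row2 = a + b, c + d
    col1 = a + c
    n = row1 + row2

    def table_prob(x):
        # Hypergeometric probability of the table whose (1,1) cell is x.
        return comb(row1, x) * comb(row2, col1 - x) / comb(n, col1)

    p_obs = table_prob(a)
    lo, hi = max(0, col1 - row2), min(row1, col1)
    # Sum probabilities of all tables as likely or less likely than the
    # observed one (two-sided p-value).
    return sum(table_prob(x) for x in range(lo, hi + 1)
               if table_prob(x) <= p_obs + 1e-12)
```

For larger r x c tables the number of tables with fixed margins grows rapidly, which is exactly why an efficient enumeration scheme such as the network algorithm matters.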
Computation of the exact distribution function of a weighted sum of chi-square variables with one or an even number of degrees of freedom is considered. The method is based on the inversion formula for the characteristic function. We applied the method to the computation of the distribution function of the cumulative chi-square statistic, to which chi-square distributions have been fitted until now.
Key words: Cumulative chi-square statistic, Cumulative distribution function, Integral in complex domain, Residue theorem
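The abstract's residue-theorem computation is not reproduced here, but the inversion idea can be sketched numerically for the one-degree-of-freedom case: the characteristic function of a weighted sum of independent chi-square(1) variables is phi(t) = prod_j (1 - 2i w_j t)^(-1/2), and the Gil-Pelaez inversion formula gives the CDF. The function name and the quadrature parameters below are assumptions for illustration only.

```python
import numpy as np

def weighted_chi2_cdf(x, weights, t_max=200.0, n=200_000):
    """CDF of sum_j w_j * chi2_1 at x, by numerical Gil-Pelaez inversion:
    F(x) = 1/2 - (1/pi) * integral_0^inf Im(exp(-itx) phi(t)) / t dt."""
    w = np.asarray(weights, dtype=float)
    t = np.linspace(1e-8, t_max, n)
    # Characteristic function of the weighted sum of chi2_1 variables.
    phi = np.prod((1.0 - 2j * w[:, None] * t) ** -0.5, axis=0)
    integrand = np.imag(np.exp(-1j * t * x) * phi) / t
    # Trapezoidal rule on the finite grid [1e-8, t_max].
    integral = np.sum((integrand[:-1] + integrand[1:]) * np.diff(t)) / 2.0
    return 0.5 - integral / np.pi
```

With all weights equal to one the sum reduces to an ordinary chi-square variable, which gives a simple check of the inversion.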
In this paper, we consider the problem of comparing several treatments based on relatively short series of temporal observations. For such a comparison it is important to take account of the characteristics of the response, the distributional properties of the temporal observations, the completeness of those observations, and so on. Among ordinary univariate analyses with a continuous response, the analysis of variance (ANOVA) of repeated measurements has been most widely used. In the ANOVA, independence, homoscedasticity and normality of the temporal observations are usually assumed. Randomization analysis of response curves has been proposed to lessen these restrictions of parametric analyses such as the ANOVA.
Here, we evaluate the performance and features of the randomization analysis of response curves in contrast with the ANOVA of repeated measurements. Assuming that each individual's vector of temporal observations follows a multivariate normal distribution, we compare the two analyses by simulation. We then evaluate the type I error rate of the significance test for the treatment effect, and the power under three alternative hypotheses. As a result, when the assumption of independence or homoscedasticity was not satisfied, the power of the ANOVA fell when the degrees of freedom were adjusted, while its type I error rate was inflated without the adjustment. Even in these circumstances, the randomization analysis kept considerably high power and had a stable, low error rate.
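The particular test statistic and design used in the paper are not specified in the abstract, so the following is only a minimal sketch of a two-group randomization analysis of response curves: treatment labels are permuted across subjects, and the statistic (here, an assumed sum of squared differences between the group mean curves) is recomputed under each permutation.

```python
import numpy as np

def randomization_test(curves_a, curves_b, n_perm=5000, seed=0):
    """Randomization test for a treatment effect between two groups of
    response curves (rows = subjects, columns = timepoints).
    Returns the permutation p-value."""
    rng = np.random.default_rng(seed)
    data = np.vstack([curves_a, curves_b])
    n_a = len(curves_a)

    def stat(x, na):
        # Distance between the two group mean response curves.
        return np.sum((x[:na].mean(axis=0) - x[na:].mean(axis=0)) ** 2)

    observed = stat(data, n_a)
    count = 0
    for _ in range(n_perm):
        perm = rng.permutation(len(data))  # shuffle treatment labels
        if stat(data[perm], n_a) >= observed:
            count += 1
    return (count + 1) / (n_perm + 1)  # add-one permutation p-value
```

Because the reference distribution is generated by relabeling, no independence, homoscedasticity or normality assumption on the temporal observations is needed, which matches the motivation described above.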
Least squares principles are employed in several methods for estimating the parameters of the weighted distance model for individual differences scaling. Some of them fit the weighted distances to the observed dissimilarity data, while others fit them to transformed dissimilarity data. A new method is developed for estimating the parameters of the model from squared dissimilarity variables. The bias component caused by the variable transformation is evaluated within the proposed procedure. The proposed method minimizes a loss function based on a weighted least squares error function, into which a quantity is introduced to reduce a computational difficulty. An application to artificial data is shown, and a Monte Carlo simulation study demonstrates that the proposed algorithm is effective in recovering the true group stimulus configuration. The problem of estimating the subject configuration is discussed.
Key words: Multidimensional scaling, Squared dissimilarity, Individual difference, Least squares method
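The estimation procedure, bias correction and weighting scheme of the paper are not reproduced here; the following only sketches the weighted distance model itself — a subject-specific squared distance d_ij = sum_k w_k (x_ik - x_jk)^2 over a group stimulus configuration — together with a plain (unweighted) least squares loss against squared dissimilarities. Both function names are hypothetical.

```python
import numpy as np

def weighted_sq_distances(X, w):
    """Model squared distances for one subject: X is the group stimulus
    configuration (stimuli x dimensions), w the subject's dimension weights."""
    diff = X[:, None, :] - X[None, :, :]
    return np.sum(w * diff ** 2, axis=-1)

def ls_loss(sq_dissim, X, w):
    """Plain least squares discrepancy between observed squared
    dissimilarities and the model weighted squared distances."""
    return np.sum((sq_dissim - weighted_sq_distances(X, w)) ** 2)
```

Fitting to squared dissimilarities, as the abstract describes, avoids a square-root transformation of the model distances, at the cost of the transformation bias that the proposed procedure evaluates.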