In this paper, we reexamine the appropriateness of crossover designs as applied to clinical trials, focusing on two-treatment, two-period crossover trials, which test two treatments (or drugs) in two periods. Specifically, we survey the pros and cons of crossover designs. We then draw attention to the preparation of a wash-out period between the first and second periods, the influence of treatment on patients' mental state (masking of treatment), temporal change in response level (the indices to be evaluated), and balance in the allocation of patients to treatment-order groups, in view of the problems caused by carryover effects, which are, so to speak, the Achilles' heel of crossover designs.
For cases where carryover effects are detected at the analysis stage of a crossover trial, we tried transformation of the observed responses as an ex post facto remedy. We reanalyzed five data sets from the published literature using the power transformation and the ACE transformation. The results suggested that carryover effects are rarely eliminated by data transformation. Thus, in applying a crossover design, careful examination at the design stage is especially important. Finally, we describe the fields of clinical trials in which crossover trials are used with relatively high frequency.
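As a minimal sketch of the kind of remedy described above (not the paper's data or code), the following applies a Box-Cox power transformation with a common maximum-likelihood lambda to invented two-period responses and then runs a Grizzle-type carryover test, which compares each subject's period sum between the two sequence groups:

```python
# Illustrative only: synthetic responses, not the five reanalyzed data sets.
import numpy as np
from scipy import stats
from scipy.special import boxcox

rng = np.random.default_rng(0)
# Hypothetical two-period responses for the AB and BA sequence groups
ab = rng.lognormal(mean=1.0, sigma=0.4, size=(12, 2))
ba = rng.lognormal(mean=1.1, sigma=0.4, size=(12, 2))

# Estimate one common power-transformation parameter by maximum likelihood
flat = np.concatenate([ab.ravel(), ba.ravel()])
_, lam = stats.boxcox(flat)

# Grizzle-type carryover test: compare each subject's sum over the two
# periods between the sequence groups, after transforming the responses
t, p = stats.ttest_ind(boxcox(ab, lam).sum(axis=1),
                       boxcox(ba, lam).sum(axis=1))
print(f"lambda = {lam:.2f}, carryover t = {t:.2f}, p = {p:.3f}")
```

If the transformation fails to remove a carryover effect, the test above remains significant, which is the situation the abstract reports as typical.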
Clustering n objects into k clusters under the within-class variance criterion can be regarded as a combinatorial optimization problem. The k-means method or one of its variants is usually applied in this case. Although these algorithms are computationally practical and efficient, the solution is not guaranteed to converge to the global optimum. Recently, Kirkpatrick et al. (1983) introduced the concept of simulated annealing for combinatorial optimization problems. The essential feature of this technique is its ability to avoid being trapped in a local rather than the global optimum.
In this paper, we develop a clustering method based on the simulated annealing algorithm and evaluate its performance. A large amount of computation is required to obtain the globally optimal solution, even with simulated annealing. However, we show that the algorithm is highly effective if a computer that fully supports parallelism is available.
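The following is a minimal sketch of simulated-annealing clustering under the within-class variance criterion, not the authors' implementation: a move reassigns one object to another cluster, and the move is accepted by the Metropolis rule, so worse solutions are occasionally accepted while the temperature is high. All parameter values are illustrative.

```python
import math, random

def within_class_variance(X, labels, k):
    """Sum over clusters of squared distances to the cluster mean."""
    total = 0.0
    for c in range(k):
        pts = [x for x, l in zip(X, labels) if l == c]
        if not pts:
            continue
        m = [sum(col) / len(pts) for col in zip(*pts)]
        total += sum(sum((xi - mi) ** 2 for xi, mi in zip(x, m)) for x in pts)
    return total

def sa_cluster(X, k, T0=1.0, cooling=0.95, steps=2000, seed=0):
    rnd = random.Random(seed)
    labels = [rnd.randrange(k) for _ in X]
    cost = within_class_variance(X, labels, k)
    T = T0
    for _ in range(steps):
        i, new = rnd.randrange(len(X)), rnd.randrange(k)
        old = labels[i]
        if new == old:
            continue
        labels[i] = new
        new_cost = within_class_variance(X, labels, k)
        # Metropolis rule: always accept improvements; accept a worse
        # solution with probability exp(-increase / T) to escape local optima
        if new_cost < cost or rnd.random() < math.exp((cost - new_cost) / T):
            cost = new_cost
        else:
            labels[i] = old
        T *= cooling
    return labels, cost

X = [(0, 0), (0, 1), (1, 0), (9, 9), (9, 10), (10, 9)]
labels, cost = sa_cluster(X, k=2)
print(labels, round(cost, 2))
```

Because every proposed move here touches a single object, the inner cost evaluation could be computed incrementally, and independent moves could be evaluated in parallel, which is the setting in which the abstract reports the method to be effective.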
In a previous paper, we presented a method for quantitatively evaluating the clarity of computer manuals written in Japanese, with the aim of improving that clarity. In particular, we proposed a method for extracting clarity factors and evaluating them quantitatively based only on surface features of the manuals, together with a model formula for an integrated evaluation of clarity. In this paper, we examine the meaning and the problems of that formula, and present a new evaluation method using a regression tree as a solution to those problems.
One practical problem is that, during the manual development phase, a quantitative improvement guideline corresponding to the descriptive factors may not be available at the right time. With the regression tree, once the value of any one descriptive factor is fixed, one can estimate the range within which the values of the other factors must be kept in order to improve the clarity of a given manual. Applying the regression tree to our problem showed the method to be useful for improving the quality of manuals.
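The core of a regression tree is the search for the split that minimizes the squared error of the two resulting leaf means. The toy sketch below (our own invented factor values and clarity scores, not the paper's data) finds such a split of a clarity score on a single hypothetical surface factor, mean sentence length:

```python
# Pure-Python regression "stump": one split on one descriptive factor.
def best_split(x, y):
    """Return the threshold on x minimizing the total squared error
    around the two leaf means, together with that error."""
    def sse(vals):
        if not vals:
            return 0.0
        m = sum(vals) / len(vals)
        return sum((v - m) ** 2 for v in vals)
    best = (None, float("inf"))
    for t in sorted(set(x)):
        left = [yi for xi, yi in zip(x, y) if xi <= t]
        right = [yi for xi, yi in zip(x, y) if xi > t]
        err = sse(left) + sse(right)
        if right and err < best[1]:
            best = (t, err)
    return best

# Hypothetical data: mean sentence length vs. rated clarity score
sentence_len = [18, 22, 25, 31, 35, 40, 44, 50]
clarity      = [82, 80, 78, 75, 60, 58, 55, 52]
threshold, err = best_split(sentence_len, clarity)
print(threshold)  # prints 31
```

A full tree applies this split search recursively in each leaf; reading the resulting thresholds backwards is what gives the improvement guideline described above, e.g. "keep mean sentence length at or below 31 to stay in the high-clarity leaf" for this invented data.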
Autocorrelation functions can be computed in several ways using the statistical software S. However, we have found that the time required to compute an autocorrelation function differs greatly depending on the algorithm adopted.
We numerically compared the computing times of a for-loop algorithm analogous to a typical FORTRAN program with those of the vector-matrix algorithm incorporated in S. The results showed that computing time is reduced remarkably by the vector-multiplication algorithm.
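The comparison in the paper was carried out in S; the following NumPy analogue (illustrative, not the original code) contrasts an explicit loop with a vectorized inner-product computation of the sample autocorrelation function, verifying that the two give the same values:

```python
import numpy as np

def acf_loop(x, max_lag):
    """Explicit-loop ACF, analogous to a typical FORTRAN program."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    c0 = np.sum(xc * xc) / n
    out = []
    for k in range(max_lag + 1):
        c = sum(xc[t] * xc[t + k] for t in range(n - k)) / n
        out.append(c / c0)
    return np.array(out)

def acf_vec(x, max_lag):
    """Vectorized ACF: each lag is a single inner product."""
    x = np.asarray(x, dtype=float)
    n = len(x)
    xc = x - x.mean()
    c0 = xc @ xc
    return np.array([xc[: n - k] @ xc[k:] for k in range(max_lag + 1)]) / c0

rng = np.random.default_rng(1)
x = rng.standard_normal(500)
print(np.allclose(acf_loop(x, 10), acf_vec(x, 10)))  # prints True
```

Timing the two versions (e.g. with `timeit`) shows the vectorized form far faster in NumPy as well, since the element-wise loop pays Python-level overhead per term while the inner product runs in compiled code, which mirrors the effect reported for S.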
Relatively few methods have been proposed for fitting curves to data without an external criterion, i.e., data with no clear distinction between response and explanatory variables. Hastie and Stuetzle (1989) proposed Principal Curves as an algorithm for fitting curves to such data. The algorithm alternates between a Projection Step and an Expectation Step until a convergence condition is satisfied. In the Projection Step, however, we must find the nearest point on the curve for each of the N data points, so a straightforward search requires computational complexity of order N².
In this paper, we propose an efficient algorithm for the Projection Step that utilizes a recursive binary-tree search. We evaluate the computational complexities of the straightforward and the refined algorithms for some special cases, and demonstrate the efficiency of our algorithm on several examples.
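As a sketch of the idea under our own simplifications (a k-d tree standing in for the recursive binary tree, and the curve discretized into M points rather than projected onto exactly), the brute-force Projection Step costs O(N·M), while building a tree over the curve points once reduces each nearest-point query to roughly O(log M):

```python
import numpy as np
from scipy.spatial import cKDTree

rng = np.random.default_rng(2)
t = np.linspace(0, 2 * np.pi, 200)
curve = np.column_stack([np.cos(t), np.sin(t)])   # discretized unit circle
# Noisy data scattered around the curve (synthetic example)
data = curve[rng.integers(0, 200, size=1000)] + rng.normal(0, 0.05, (1000, 2))

# Straightforward search: full N x M squared-distance matrix
d2 = ((data[:, None, :] - curve[None, :, :]) ** 2).sum(axis=2)
brute_dists = np.sqrt(d2.min(axis=1))

# Tree-based search: build once over the curve, then query each data point
tree = cKDTree(curve)
dists, near = tree.query(data)

print(np.allclose(brute_dists, dists))  # prints True
```

The projected points `curve[near]` then feed the Expectation Step; only the search strategy changes, not the projections found.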
Graphical methods play an important role in statistical data analysis. However, many graphical methods rely only on visual intuition and do not yield statistics that can be investigated theoretically. In this paper, we present graphical methods based on linked line charts for a variety of nonparametric problems. With the linked line chart methods, we can define several statistics whose distributions can be studied theoretically.
Keywords: Graphical method, Nonparametric statistic, Association measure, Agreement measure, Goodness-of-fit, Symmetry measure, k-sample problem