Any data fitting must have a function that describes the process you are analyzing. All fitting packages provide a set of generic built-in functions. These typically include linear, polynomial, exponential, and some other common dependencies. Generic functions are defined in the most general form, with the maximum number of unrestricted parameters, which makes them usable with a broad range of processes. Such flexibility, however, takes its toll on the accuracy of the fit and on the effort it takes to make such a generic fit converge.

How to choose the correct function?

As described here, you must have a reasonable understanding of the process you are analyzing. At the very least it is necessary to know the general type of dependence (exponential, linear, titration, polynomial, mixed) and the number of independent variables. From a visual, qualitative evaluation of the data you need to estimate how many processes are involved and how many parameters you need to describe them. At the moment, most data analyzed in this group involve only one meaningful independent variable, time, although other situations are possible, for example when both time and concentration change across a series.

It is always a good idea to use the minimum number of parameters needed to describe your process adequately. Conventional wisdom holds that one can fit any process with a sufficient number of exponents, typically over four. Thus, in kinetic analysis, start with the minimum number of exponents and advance to a higher order only if you are sure you cannot describe the data adequately with the selected order. In some cases it also helps to limit the range of acceptable values of parameters, such as restricting kinetic rates to positive values. Such restrictions are known as constraints. While Igor allows imposing numerical constraints on coefficients of built-in functions, and even setting them to a fixed value, you cannot change how the parameters are used in the calculations. It is also not possible to combine multiple built-in functions, such as kinetics with a linear drift. To do this you need to write a user fitting function.

Consider two standard families for positive random variables: a Gamma distribution and a lognormal distribution.

> X=exp(c(rnorm(50,1,1),rnorm(50,2,1.2)))

If we want to visualize those two distributions, let us use

> vab=pgamma(u,ab$estimate,ab$estimate)

What else can we say? Actually, we can also compute the Kolmogorov-Smirnov statistic. This can be done using

> ks.test(X,"plnorm",ms$estimate,ms$estimate)
> ks.test(X,"pgamma",ab$estimate,ab$estimate)

From a theoretical point of view, we should not look at the p-values, since the null distribution is based on a fixed distribution, not a fitted one (see the Lilliefors test for normal samples). The Gamma distribution seems to be very far away from the true distribution: its statistic is twice the one we get with our lognormal distribution, and one p-value is 72% while the other is 2.5%. Here, we should prefer this lognormal distribution to that Gamma one. But here we did consider only one distribution in each family. Does that mean that we cannot find one Gamma distribution that will be better than all possible lognormal distributions? Better, for instance, according to the Kolmogorov-Smirnov statistic? Well, it is possible to use another strategy to find appropriate parameters: we can actually minimize this statistic.
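The fragments above reference objects (ms, ab, u) whose construction is not shown. A plausible way to assemble the whole workflow, assuming (as the estimate fields suggest) that ms and ab are maximum-likelihood fits from MASS::fitdistr, is the sketch below; note that it indexes the parameter vectors explicitly with [1] and [2] rather than passing the full estimate vector twice:

```r
# Sketch of the implied workflow; ms and ab are ASSUMED to come from
# MASS::fitdistr, since the original fragments do not show their origin.
library(MASS)

set.seed(1)
X <- exp(c(rnorm(50, 1, 1), rnorm(50, 2, 1.2)))  # mixture on the log scale

# Fit one distribution from each family by maximum likelihood
ms <- fitdistr(X, "lognormal")  # estimates: meanlog, sdlog
ab <- fitdistr(X, "gamma")      # estimates: shape, rate

# Fitted CDFs on a grid (for plotting, as in the vab fragment)
u   <- seq(0, max(X), length.out = 101)
vab <- pgamma(u, ab$estimate[1], ab$estimate[2])
vln <- plnorm(u, ms$estimate[1], ms$estimate[2])

# Kolmogorov-Smirnov statistics against each fitted distribution
ks_ln <- ks.test(X, "plnorm", ms$estimate[1], ms$estimate[2])
ks_ga <- ks.test(X, "pgamma", ab$estimate[1], ab$estimate[2])
```

Comparing ks_ln$statistic and ks_ga$statistic then reproduces the comparison discussed in the text (remembering the caveat that the p-values are not valid for fitted parameters).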
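The closing idea, choosing parameters that minimize the Kolmogorov-Smirnov statistic rather than maximizing the likelihood, can be sketched as follows. The function and variable names here are illustrative, not from the original post; the parameters are optimized on the log scale so they stay positive:

```r
# Minimum-distance estimation (a sketch, not the post's own code):
# pick Gamma parameters minimizing the KS distance to the sample.
set.seed(1)
X <- exp(c(rnorm(50, 1, 1), rnorm(50, 2, 1.2)))

# KS statistic for Gamma(shape, rate), parameterized on the log scale
ks_dist <- function(logpar) {
  shape <- exp(logpar[1]); rate <- exp(logpar[2])
  as.numeric(ks.test(X, "pgamma", shape, rate)$statistic)
}

opt  <- optim(c(0, 0), ks_dist)  # Nelder-Mead on the log-parameters
best <- exp(opt$par)             # back-transform to (shape, rate)
opt$value                        # the minimized KS statistic
```

Whether this minimum-distance Gamma beats the maximum-likelihood lognormal is exactly the question raised in the text, and it can now be checked by comparing opt$value with the lognormal KS statistic.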
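Returning to the earlier advice on kinetic fits (start with the minimum number of exponents, and constrain rates to positive values): the same idea can be illustrated outside Igor. Below is a hedged R sketch, with invented data and parameter names, of a single-exponential fit where a box constraint plays the role of an Igor-style constraint keeping the rate non-negative:

```r
# Illustration in R (not Igor) of a constrained single-exponential fit.
# The data, model, and names (A, k, c0) are invented for this example.
set.seed(2)
t <- seq(0, 10, by = 0.1)
y <- 2 * exp(-0.8 * t) + 0.5 + rnorm(length(t), sd = 0.02)

# The "port" algorithm accepts box constraints, so lower = c(0, 0, -Inf)
# restricts the amplitude and the kinetic rate to non-negative values.
fit <- nls(y ~ A * exp(-k * t) + c0,
           start = list(A = 1, k = 0.5, c0 = 0),
           algorithm = "port",
           lower = c(0, 0, -Inf))
coef(fit)  # estimated A, k, c0
```

Only if the residuals of this one-exponent fit show clear structure would one advance to a two-exponential model, in the spirit of the minimum-parameters advice above.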