smooth.terms {mgcv}    R Documentation
Smooth terms are specified in a gam formula using s, te, ti and t2 terms.
Various smooth classes are available, for different modelling tasks, and users can add smooth classes (see user.defined.smooth). What defines a smooth class is the basis used to represent the smooth function and the quadratic penalty (or multiple penalties) used to penalize the basis coefficients in order to control the degree of smoothness. Smooth classes are invoked directly by s terms, or as building blocks for tensor product smoothing via te, ti or t2 terms (only smooth classes with single penalties can be used in tensor products). The smooths built into the mgcv package are all based, one way or another, on low rank versions of splines. For the full rank versions see Wahba (1990).

Note that smooths can be used rather flexibly in gam models. In particular the linear predictor of the GAM can depend on (a discrete approximation to) any linear functional of a smooth term, using by variables and the ‘summation convention’ explained in linear.functional.terms.
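For orientation, here is a minimal sketch of how such terms appear in practice. It uses data simulated with mgcv's gamSim (the covariate names x0-x3 are those produced by gamSim(1)); the particular formulae are illustrative only.

library(mgcv)
set.seed(1)
dat <- gamSim(1, n = 400, dist = "normal", scale = 2)   # simulated test data
## default thin plate regression spline smooth of each covariate
b1 <- gam(y ~ s(x0) + s(x1) + s(x2) + s(x3), data = dat)
## the same covariates, but with a tensor product smooth of x0 and x1
b2 <- gam(y ~ te(x0, x1) + s(x2) + s(x3), data = dat)
summary(b1)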
The single-penalty built-in smooth classes are summarized as follows.
bs="tp". These are low rank isotropic smoothers of any number of covariates. By isotropic is meant that rotation of the covariate co-ordinate system will not change the result of smoothing. By low rank is meant that they have far fewer coefficients than there are data to smooth. They are reduced rank versions of the thin plate splines and use the thin plate spline penalty. They are the default smooth for s terms because there is a defined sense in which they are the optimal smoother of any given basis dimension/rank (Wood, 2003). Thin plate regression splines do not have ‘knots’ (at least not in any conventional sense): a truncated eigen-decomposition is used to achieve the rank reduction. See tprs for further details.
bs="ts" is as "tp" but with a modification to the smoothing penalty, so that the null space is also penalized slightly and the whole term can therefore be shrunk to zero.
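As an illustrative sketch (simulated data; the basis dimension k = 20 is an arbitrary choice), the basis is selected through the bs argument of s, so the shrinkage version is requested simply by changing bs:

library(mgcv)
set.seed(2)
x <- runif(300); y <- sin(3 * x) + rnorm(300) * 0.2
b_tp <- gam(y ~ s(x, bs = "tp", k = 20))   # thin plate regression spline (the default)
b_ts <- gam(y ~ s(x, bs = "ts", k = 20))   # shrinkage version: whole term can shrink to zero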
bs="ds". These generalize thin plate splines. In particular, for any given number of covariates they allow lower orders of derivative in the penalty than thin plate splines (and hence a smaller null space). See Duchon.spline for further details.
bs="cr". These have a cubic spline basis defined by a modest sized set of knots spread evenly through the covariate values. They are penalized by the conventional integrated square second derivative cubic spline penalty. For details see cubic.regression.spline and e.g. Wood (2006a).
bs="cs" specifies a shrinkage version of "cr".
bs="cc" specifies a cyclic cubic regression spline (see cyclic.cubic.spline), i.e. a penalized cubic regression spline whose ends match, up to second derivative.
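A minimal sketch of the cubic and cyclic cubic bases (simulated 24 hour cycle data; the choice of k and of the end knots at 0 and 24 are illustrative assumptions):

library(mgcv)
set.seed(3)
hour <- runif(200, 0, 24)
y <- sin(2 * pi * hour / 24) + rnorm(200) * 0.2
b_cr <- gam(y ~ s(hour, bs = "cr", k = 10))    # knot based cubic regression spline
b_cc <- gam(y ~ s(hour, bs = "cc", k = 10),    # cyclic version: ends match up to
            knots = list(hour = c(0, 24)))     # second derivative at 0 and 24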
bs="sos". These are two dimensional splines on a sphere. Arguments are latitude and longitude, and they are the analogue of thin plate splines for the sphere. Useful for data sampled over a large portion of the globe, when isotropy is appropriate. See Spherical.Spline for details.
bs="ps". These are P-splines as proposed by Eilers and Marx (1996). They combine a B-spline basis with a discrete penalty on the basis coefficients, and any sane combination of penalty and basis order is allowed. Although this penalty has no exact interpretation in terms of function shape, in the way that the derivative penalties do, P-splines perform almost as well as conventional splines in many standard applications, and can perform better in particular cases where it is advantageous to mix different orders of basis and penalty.
bs="cp" gives a cyclic version of a P-spline (see cyclic.p.spline).
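A short sketch of mixing basis and penalty order with a P-spline (simulated data; it assumes the usual mgcv convention that m gives the basis order followed by the penalty order, so that m = c(2, 3) requests a cubic B-spline basis with a third order difference penalty):

library(mgcv)
set.seed(4)
x <- runif(300); y <- sin(2 * pi * x) + rnorm(300) * 0.2
b_ps <- gam(y ~ s(x, bs = "ps", m = c(2, 3), k = 20))   # cubic basis, 3rd order difference penalty
b_cp <- gam(y ~ s(x, bs = "cp", k = 20))                # cyclic P-spline (signal matches at 0 and 1 here)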
bs="re". These are parametric terms penalized by a ridge penalty (i.e. the identity matrix). When such a smooth has multiple arguments then it represents the parametric interaction of these arguments, with the coefficients penalized by a ridge penalty. The ridge penalty is equivalent to an assumption that the coefficients are i.i.d. normal random effects. See smooth.construct.re.smooth.spec.
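A minimal sketch of a simple random intercept via bs="re" (simulated grouped data; note that the grouping variable must be supplied as a factor):

library(mgcv)
set.seed(5)
fac <- factor(rep(1:20, each = 10))                      # 20 groups of 10 observations
y <- rnorm(20)[as.integer(fac)] + rnorm(200) * 0.5       # group effects plus noise
b_re <- gam(y ~ s(fac, bs = "re"))                       # ridge penalty = i.i.d. normal random intercepts
gam.vcomp(b_re)                                          # implied variance components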
bs="mrf". These are popular when space is split up into discrete contiguous geographic units (districts of a town, for example). In this case a simple smoothing penalty is constructed based on the neighbourhood structure of the geographic units. See mrf for details and an example.
bs="gp". Gaussian process models with a variety of simple correlation functions can be represented as smooths. See gp.smooth for details.
bs="so" (not actually a single-penalty smoother, but bs="sw" and bs="sf" allow splitting it into single-penalty components for use in tensor product smoothing). These are finite area smoothers designed to smooth within complicated geographical boundaries, where the boundary matters (e.g. you do not want to smooth across boundary features). See soap for details.
Broadly speaking the default penalized thin plate regression splines tend to give the best MSE performance, but they are slower to set up than the other bases. The knot based penalized cubic regression splines (with derivative based penalties) usually come next in MSE performance, with the P-splines doing just a little worse. However, the P-splines are useful in non-standard situations.
All the preceding classes (and any user defined smooths with single penalties) may be used as marginal bases for tensor product smooths specified via te, ti or t2 terms. Tensor product smooths are smooth functions of several variables where the basis is built up from tensor products of bases for smooths of fewer (usually one) variable(s) (marginal bases). The multiple penalties for these smooths are produced automatically from the penalties of the marginal smooths. Wood (2006b) and Wood, Scheipl and Faraway (2012) give the general recipe for these constructions.
te smooths have one penalty per marginal basis, each of which is interpretable in a similar way to the marginal penalty from which it is derived. See Wood (2006b).
ti smooths exclude the basis functions associated with the ‘main effects’ of the marginal smooths, plus interactions other than the highest order specified. These provide a stable and interpretable way of specifying models with main effects and interactions. For example, if we are interested in the linear predictor f1(x) + f2(z) + f3(x,z), we might use the model formula y~s(x)+s(z)+ti(x,z) or y~ti(x)+ti(z)+ti(x,z). A similar construction involving te terms instead will be much less statistically stable.
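A runnable sketch of the decomposition just described (simulated data with genuine main effects and an interaction; the data generating function is arbitrary):

library(mgcv)
set.seed(6)
n <- 400; x <- runif(n); z <- runif(n)
y <- sin(3 * x) + z^2 + x * z + rnorm(n) * 0.2
b_ti <- gam(y ~ s(x) + s(z) + ti(x, z))   # main effects plus a pure interaction term
anova(b_ti)                               # approximate test of whether ti(x,z) is needed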
t2 uses an alternative tensor product construction that results in more penalties, each having a simple non-overlapping structure allowing use with the gamm4 package. It is a natural generalization of the SS-ANOVA construction, but the penalties are a little harder to interpret. See Wood, Scheipl and Faraway (2012/13).
Tensor product smooths often perform better than isotropic smooths when the covariates of a smooth are not naturally on the same scale, so that their relative scaling is arbitrary. For example, if smoothing with respect to time and distance, an isotropic smoother will give very different results if the units are cm and minutes compared to if the units are metres and seconds: a tensor product smooth will give the same answer in both cases (see te for an example of this). Note that te terms are knot based, and the thin plate splines seem to offer no advantage over cubic or P-splines as marginal bases.
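A sketch of the scaling point (simulated data; the covariate names and scales are made up): refitting with the distance covariate rescaled changes the isotropic fit but leaves the tensor product fit essentially unchanged.

library(mgcv)
set.seed(7)
n <- 400
mins <- runif(n, 0, 60); metres <- runif(n, 0, 1000)
y <- sin(mins / 10) + cos(metres / 300) + rnorm(n) * 0.2
b_iso <- gam(y ~ s(mins, metres))    # isotropic thin plate spline: depends on the units used
b_te  <- gam(y ~ te(mins, metres))   # tensor product: invariant to rescaling either covariate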
Some further specialist smoothers that are not suitable for use in tensor products are also available.
bs="ad". Univariate and bivariate adaptive smooths are available (see adaptive.smooth). These are appropriate when the degree of smoothing should itself vary with the covariates to be smoothed, and the data contain sufficient information to be able to estimate the appropriate variation. Because this flexibility is achieved by splitting the penalty into several ‘basis penalties’ these terms are not suitable as components of tensor product smooths, and are not supported by gamm.
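A brief sketch of a case suited to an adaptive smooth (a simulated signal that is sharply peaked in one region and smooth elsewhere; k = 40 is just a generous basis dimension):

library(mgcv)
set.seed(8)
x <- runif(400)
y <- exp(-(x - 0.2)^2 / 0.005) + 0.3 * x + rnorm(400) * 0.05   # sharp bump plus gentle trend
b_ad <- gam(y ~ s(x, bs = "ad", k = 40))   # penalty strength allowed to vary along x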
bs="fs". Smooth factor interactions are often produced using by variables (see gam.models), but a special smoother class (see factor.smooth.interaction) is available for the case in which a smooth is required at each of a large number of factor levels (for example a smooth for each patient in a study), and each smooth should have the same smoothing parameter. The "fs" smoothers are set up to be efficient when used with gamm, and have penalties on each null space component (i.e. they are fully ‘random effects’).
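A minimal sketch of the "fs" basis (simulated data with one noisy curve per group; the factor and the continuous covariate are both supplied to s, and all the per-level smooths share a smoothing parameter):

library(mgcv)
set.seed(9)
fac <- factor(rep(1:15, each = 30))                            # 15 'subjects'
x <- runif(450)
y <- sin(2 * pi * x) * rnorm(15, 1, 0.3)[as.integer(fac)] + rnorm(450) * 0.2
b_fs <- gam(y ~ s(x, fac, bs = "fs", k = 5))                   # one smooth per level, common smoothing parameter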
Simon Wood <simon.wood@r-project.org>
Eilers, P.H.C. and B.D. Marx (1996) Flexible smoothing with B-splines and penalties. Statistical Science, 11(2):89-121.
Wahba, G. (1990) Spline Models of Observational Data. SIAM.
Wood, S.N. (2003) Thin plate regression splines. J.R.Statist.Soc.B 65(1):95-114.
Wood, S.N. (2006a) Generalized Additive Models: an introduction with R. CRC.
Wood, S.N. (2006b) Low rank scale invariant tensor product smooths for generalized additive mixed models. Biometrics 62(4):1025-1036.
Wood, S.N., F. Scheipl and J.J. Faraway (2013) Straightforward intermediate rank tensor product smoothing in mixed models. Statistics and Computing 23(3):341-360. [online 2012]
s, te, t2, tprs, Duchon.spline, cubic.regression.spline, p.spline, mrf, soap, Spherical.Spline, adaptive.smooth, user.defined.smooth, smooth.construct.re.smooth.spec, smooth.construct.gp.smooth.spec, factor.smooth.interaction
## see examples for gam and gamm