Data Analysis and Regression
This module demonstrates some of the capabilities of R for exploring
univariate properties of quantitative variables and relationships among
two or more such variables.
We begin by examining the Hipparcos dataset, which we read directly from the web:
hip = read.table("http://astrostatistics.psu.edu/datasets/HIP_star.dat",
   header=T, fill=T)
Recall the variable names in this dataset using the names
function. By using attach(hip),
we can automatically create temporary variables with these names (these variables
are not saved as part of the R session, and they are superseded by any other R objects
of the same names).
After using the attach command, we can obtain, say, individual summaries
of the variables:
We can also use the summary function
on the entire 'hip' data frame.
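The commands described above can be sketched as follows (assuming 'hip' has been read in as shown earlier):

```r
names(hip)     # the variable names in the dataset
attach(hip)    # make each column available by name
summary(Vmag)  # summary of a single variable
summary(hip)   # summaries of every column at once
```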
Next, we summarize some of this information graphically using a boxplot
showing the median, quartiles, and
outliers for the four variables Vmag, pmRA, pmDE, and B.V (the last variable
used to be B-V, or B minus V, but R does not allow certain characters). These
are the 2nd, 6th, 7th, and 9th columns of 'hip'.
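A first attempt might simply hand all four columns to boxplot at once (a sketch, assuming 'hip' is loaded):

```r
boxplot(hip[, c(2, 6, 7, 9)])
```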
Our first attempt looks bad due to different scales of
the variables, so we construct an array of four single-variable plots:
par(mfrow=c(2,2))
for(i in c(2,6,7,9)) boxplot(hip[,i], main=names(hip)[i])
par(mfrow=c(1,1))
The boxplot command does more than produce plots; it also returns output that can be
more closely examined. Below, we use the same command as earlier, but we produce the
plot and save the output.
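For example (a sketch, continuing from the attached 'hip' data):

```r
b = boxplot(hip[, c(2, 6, 7, 9)])
names(b)   # reveals the components of the returned list
```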
'b' is an object called a list. To understand its contents, read the help for
boxplot. Suppose we wish to see
all of the outliers in the pmRA variable, which is the second of the four variables
in the current boxplot:
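One way to extract them (a sketch; 'out' holds the outlying values and 'group' records which boxplot each belongs to):

```r
b$out[b$group == 2]   # outliers belonging to the 2nd boxplot, pmRA
```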
Next, we'll make a more elaborate boxplot. Suppose we wish to examine the values of Vmag, with objects
broken into categories according to the B.V variable:
boxplot(Vmag ~ cut(B.V, breaks=(-1:6)/2),
   notch=T, varwidth=T, las=1, tcl=.5,
xlab=expression("B minus V"),
main="Can you find the red giants?",
cex=1, cex.lab=1.4, cex.axis=.8, cex.main=1)
axis(2, labels=F, at=0:12, tcl=-.25)
axis(4, at=0:12, labels=F, tcl=.25)
The notches in the boxes, produced using "notch=T", can be used to test for differences in
the medians (see the help for boxplot.stats
for details). With "varwidth=T", the box widths are proportional to the square roots of
the sample sizes. The "cex" options all give scaling factors, relative to default:
"cex" is for plotting text and symbols, "cex.axis" is for axis annotation,
"cex.lab" is for the x and y labels, and
"cex.main" is for main titles.
The axis commands are used to add an axis to the current plot. The first such command
above adds smaller tick marks at all integers, whereas the second one adds the
axis on the right.
The pattern in the plot above is telling us something about the bivariate relationship
between the two variables. Yet it is probably easier to grasp this relationship
by producing a scatterplot:
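A scatterplot can be produced with the plot function (assuming 'hip' is attached):

```r
plot(B.V, Vmag)
```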
The above plot is a bit busy because of the default plotting character,
so let's use a different one:
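For example, pch=46 plots each point as a single dot:

```r
plot(B.V, Vmag, pch=46)   # pch=46 is the '.' character
```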
Let's now produce the same graph but selecting just for Hyades stars. This open cluster
should be concentrated both in the sky coordinates RA and DE, and also in the proper
motion variables pmRA and pmDE. We start by noticing a concentration of stars in the
plot of RA versus DE, which we begin to select with a cut on RA:
x1= (RA>50 & RA<100)
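The concentration can be seen in a sky plot (a sketch, assuming 'hip' is attached):

```r
plot(RA, DE, pch=46)   # sky coordinates; note the clump captured by x1
```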
Next, we select in the proper motions. (As our cuts through the data
are parallel to the axes, this variable-by-variable classification approach
is sometimes called Classification and Regression
Trees or CART, a very common multivariate classification procedure.)
x2=(pmRA>90 & pmRA<130)
x3=(pmDE>-60 & pmDE< -10) # Space in '< -' is necessary!
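Combining the three cuts and replotting gives the Hyades candidates (a sketch):

```r
x4 = x1 & x2 & x3                 # stars passing all three cuts
plot(B.V[x4], Vmag[x4], pch=20)   # HR diagram of the selected stars
```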
After our selection,
we replot the HR diagram of Vmag vs. B.V. This shows the Zero Age Main Sequence, plus four
red giants, with great precision. Outliers in the original Hipparcos dataset have
been effectively removed. However, let's have a final look at the stars we have
identified using the pairs
command to produce all bivariate plots for pairs of variables.
We'll exclude the first and fifth columns (the HIP identifying number
and the parallax, which is already known to lie in a narrow band).
Note that indexing a matrix or vector using negative integers has the effect of
excluding the corresponding entries.
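A sketch of the pairs command with the exclusion described above:

```r
pairs(hip[x4, -c(1, 5)], pch=20)   # drop columns 1 (HIP) and 5 (Plx)
```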
We see that there is one outlying star in the DE variable, indicating that this
star is not actually one of the Hyades, and one outlying star in the e_Plx variable,
indicating that its measurements are not reliable. We exclude both points:
x5=x4 & (DE>0) & (e_Plx<5)
Note that the x5 variable, a vector of TRUE and FALSE, may be summed to reveal the
number of TRUE's.
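For example:

```r
sum(x5)   # TRUE counts as 1 and FALSE as 0, so this counts the selected stars
```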
As a final look at these data, let's consider the original plot of Vmag versus B.V
but make the 92 stars we just identified look bigger (pch=20 instead of 46) and
color them red (col=2 instead of 1):
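One way to do this (a sketch): both pch and col may be vectors, indexed here by the logical vector x5 (1+x5 equals 1 for FALSE, 2 for TRUE):

```r
plot(B.V, Vmag, pch=c(46, 20)[1 + x5], col=1 + x5)
```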
Consider the relationship between DE and pmDE among the 92 stars identified above:
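A sketch of this plot:

```r
plot(DE[x5], pmDE[x5], pch=20)
```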
If we wish to fit a linear regression (least squares) line to this plot, we may do
so using the lm
(linear model) function. Note the "response ~ predictor(s)" format used
in formulas for functions like lm:
m=lm(pmDE[x5] ~ DE[x5])
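The fitted line may be added to the current plot with abline, which accepts an lm object:

```r
abline(m)
```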
The m object just created is an object of class lm. The class of
an object in R can help to determine how it is treated by functions such as print:
m # same as print(m)
There is a lot of information contained in m that is not displayed by print:
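Some ways to dig further into m (a sketch), ending with the residual plot discussed next:

```r
names(m)              # all components stored in the lm object
summary(m)            # coefficient table, R-squared, etc.
plot(m$fit, m$resid)  # fitted values versus residuals
```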
The residual plot produced above reveals no irregularities. Note that when referring
to the items in a list by name, as in m$fit or m$resid above, it is only necessary
to type enough of the name to uniquely identify it.
We now look at a different dataset, the SDSS quasar dataset, which can be downloaded
from the astrostatistics datasets site and read in using read.table as before.
We won't need all of these columns. Let's keep only a subset.
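A sketch of reading the data; the file name SDSS_quasar.dat is an assumption here, not stated in the text:

```r
quas = read.table("http://astrostatistics.psu.edu/datasets/SDSS_quasar.dat",
   header=T)    # file name assumed
dim(quas)       # number of rows and columns
names(quas)     # the column names
```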
Creating all bivariate plots using pairs
would take a bit of time with this size dataset; let's
do some exploration using a subset of 2000 rows, which we may
obtain using the sample function:
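A sketch using sample to draw 2000 random rows:

```r
s = quas[sample(nrow(quas), 2000), ]   # 2000 randomly chosen rows
pairs(s, pch=46)
```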
There appear to be some strange outliers in columns like Radio and X.ray. It seems
that values of 0, -1, and -9 are intended to signify missing data.
Let's remove all 0's and -1's and -9's
by changing them to NA, then use the attach function
to create new temporary variables with the same names as the columns and redo
the bivariate scatterplots. Remember,
any R objects with the same names as one of the columns will override these temporary variables:
quas[quas==0 | quas==-1 | quas==-9]=NA
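A sketch of the attach-and-redo step:

```r
attach(quas)                                  # columns now available by name
pairs(quas[sample(nrow(quas), 2000), ], pch=46)
```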
Several of these plots look interesting. Let's take a look at the
relationship between z and M_i (using all 46420 points):
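A sketch of this plot (assuming 'quas' is attached):

```r
plot(z, M_i, pch=46)
```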
Clearly a straight line does not describe this relationship well. How about
a quadratic relationship? Note: fitting a quadratic curve to data is still
linear regression. This is because we are modelling the response (M_i) using a linear
combination of two variables (z and z2).
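A sketch of both fits: the straight line comes from lm(M_i ~ z), and the quadratic from regressing on z together with z2 = z^2:

```r
m1 = lm(M_i ~ z)
abline(m1)                          # the unsatisfactory straight line
z2 = z^2
m2 = lm(M_i ~ z + z2)               # quadratic curve, still linear regression
x = seq(min(z), max(z), length=200)
lines(x, predict(m2, data.frame(z=x, z2=x^2)), col=2)
```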
Note that we used the seq
function to create a sequence of x-axis (predictor) values to plot.
We also used the predict
function to obtain the y-axis values predicted by model m2 for each
point in the sequence x.
Finally, we used the lines
function to add line segments to the current plot, joining the points in the order given.
Other types of fits are possible as well. For instance, we can try a non-linear fit.
Let's consider another bivariate relationship:
It is possible to apply weighted least squares. For instance, if the variances
of the response variables are known, a typical weight is the reciprocal of the variance.
Although the weighted and unweighted fits differ
in this example, the difference is not large:
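A sketch of weighted least squares on synthetic data (the variable names here are hypothetical, not from the quasar dataset):

```r
set.seed(1)
x = runif(50)
sig = runif(50, 0.5, 2)               # known standard deviations
y = 1 + 2*x + rnorm(50, sd=sig)
m.ols = lm(y ~ x)                     # ordinary least squares
m.wls = lm(y ~ x, weights=1/sig^2)    # weight = reciprocal of variance
coef(m.ols)
coef(m.wls)
```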
We can also try resistant regression, using the lqs
function in the MASS package.
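A sketch on synthetic data with a few gross outliers (MASS ships with standard R distributions):

```r
library(MASS)                     # provides lqs for resistant regression
set.seed(1)
x = runif(50)
y = 1 + 2*x + rnorm(50)
y[1:3] = y[1:3] + 20              # contaminate with gross outliers
m.lqs = lqs(y ~ x)                # least trimmed squares, resistant to them
coef(m.lqs)
```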
Finally, let's use least absolute deviation regression. To do this, we'll
use a function called rq
in the "quantreg" package. This package is
not part of the standard distribution of R, so we'll need to download it.
In order to do this, we may need to tell R where to store the installed package.
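A standard way to install and load the package (the library location, if one must be specified, is machine-specific):

```r
install.packages("quantreg")   # downloads from CRAN; add lib="..." if needed
library(quantreg)
```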
Assuming the quantreg package is loaded, we may now compare the least-squares fit (red)
with the least absolute deviations fit (green). In this example, the two
fits are nearly identical:
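A sketch of the comparison on hypothetical data (assuming quantreg is loaded); rq with tau=0.5 minimizes the sum of absolute deviations:

```r
library(quantreg)
set.seed(2)
x = runif(100)
y = x + rnorm(100)
plot(x, y, pch=20)
abline(lm(y ~ x), col=2)            # least squares fit (red)
abline(rq(y ~ x, tau=0.5), col=3)   # least absolute deviations fit (green)
```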