Short answer: standard error
In detail:
Let's say we want to estimate the population mean from sample data.
It makes sense to expect the sample mean to be close to the population mean.
Furthermore, as we collect more data, we would expect the error of our estimate to shrink.
Taking it one step further, if the means from repeated samples don't differ much from one another, we'd expect the error of our estimate to be small, and vice versa.
So, roughly speaking, the standard error of the mean estimate is
* proportional to how much each sample value deviates from the sample mean (actually, the square root of the sum of squared differences, with n-1 in the denominator inside the square root)
* inversely proportional to the sample size (actually, the square root of n: the formula is SE = s/sqrt(n), where s is the sample standard deviation computed with n-1). A quick numerical check is sketched below.
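A minimal sketch of that formula, assuming Python with NumPy and SciPy (the sample values are made up for illustration); it compares the hand-rolled computation with SciPy's built-in standard-error-of-the-mean helper:

```python
import numpy as np
from scipy import stats

# A small illustrative sample (made-up numbers).
x = np.array([4.1, 5.3, 6.0, 5.5, 4.8, 5.9, 6.2, 5.1])
n = len(x)

# Sample standard deviation: sqrt(sum of squared deviations / (n - 1)).
s = np.sqrt(np.sum((x - x.mean()) ** 2) / (n - 1))

# Standard error of the mean: s / sqrt(n).
se_manual = s / np.sqrt(n)

# SciPy computes the same quantity (it also uses n - 1 by default).
se_scipy = stats.sem(x)

print(se_manual, se_scipy)  # the two values should agree
```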
How about the standard error of the slope estimate in linear regression?
Roughly speaking, the standard error of the slope estimate is
* proportional to how much each regression-predicted value deviates from the observed value (actually, the square root of the sum of squared residuals, divided by n-2 inside the square root)
* inversely proportional to the square root of the sum of squared deviations of the PREDICTOR from its mean.
In other words, the wider the spread of your predictor, the smaller your standard error; a sketch of the formula follows below.
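Here is a rough numerical sketch (again Python with made-up data; the variable names are my own), checking the formula against scipy.stats.linregress, which also reports the slope's standard error:

```python
import numpy as np
from scipy import stats

# Made-up data for illustration.
x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0, 7.0, 8.0])
y = np.array([2.1, 2.9, 4.2, 4.8, 6.1, 6.9, 8.2, 8.8])
n = len(x)

# Ordinary least squares fit (highest-degree coefficient first).
slope, intercept = np.polyfit(x, y, 1)
residuals = y - (intercept + slope * x)

# Residual standard error: sqrt(sum of squared residuals / (n - 2)).
sigma_hat = np.sqrt(np.sum(residuals ** 2) / (n - 2))

# SE of slope: residual standard error / sqrt(sum of squared deviations of the PREDICTOR).
se_slope_manual = sigma_hat / np.sqrt(np.sum((x - x.mean()) ** 2))

# Compare with SciPy's regression output.
result = stats.linregress(x, y)
print(se_slope_manual, result.stderr)  # should agree
```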
It actually makes sense. Think about it: slope = (y2 - y1)/(x2 - x1).
If y2 and/or y1 is off a little bit,
* how much is the slope going to change if x2 is far away from x1?
* how much is the slope going to change if x2 is very close to x1?
So, if you are doing a controlled experiment, spread your predictor values over a wider range. A quick simulation of this effect is below.
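This is only a sketch of the intuition, not a proof; the true line, noise level, and the two predictor ranges are arbitrary choices of mine:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

def slope_se(x):
    # Same true line and noise level for both designs: y = 1 + 2x + noise.
    y = 1.0 + 2.0 * x + rng.normal(scale=1.0, size=x.size)
    return stats.linregress(x, y).stderr

x_narrow = np.linspace(4.5, 5.5, 50)  # predictor squeezed into a narrow range
x_wide = np.linspace(0.0, 10.0, 50)   # same number of points, wider range

print("narrow-range slope SE:", slope_se(x_narrow))
print("wide-range slope SE:  ", slope_se(x_wide))
# The wide design gives a noticeably smaller standard error for the slope.
```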
Closely related is the t-statistic, which is defined as
estimated value / standard error
With a given t-statistic, we can look up the corresponding p-value, which is the probability of obtaining a test statistic at least as extreme as the one observed, assuming the null hypothesis is true.
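For example, assuming a two-sided test (the usual default in regression output), the p-value can be computed from the t-statistic and the residual degrees of freedom; the numbers here are placeholders I made up:

```python
from scipy import stats

estimate = 1.8    # e.g. an estimated slope (made-up value)
std_error = 0.6   # its standard error (made-up value)
df = 18           # residual degrees of freedom, n - 2 in simple regression

t_stat = estimate / std_error
# Two-sided p-value: probability of a |t| at least this large under the null.
p_value = 2 * stats.t.sf(abs(t_stat), df)
print(t_stat, p_value)
```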
The 95% confidence interval is roughly the estimated value +- 2 * standard error (the exact multiplier comes from the t distribution and is close to 2 for moderate sample sizes).
If that interval contains 0, you cannot reject (at the 5% level) the null hypothesis that the true value is 0.
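A last sketch tying it together, using the same made-up numbers as above: the rough "+- 2 standard errors" interval, the exact t-based interval, and the check against 0.

```python
from scipy import stats

estimate, std_error, df = 1.8, 0.6, 18  # made-up numbers

# Rough rule of thumb: estimate +/- 2 * SE.
rough_ci = (estimate - 2 * std_error, estimate + 2 * std_error)

# Exact 95% interval uses the t critical value (about 2.1 for df = 18).
t_crit = stats.t.ppf(0.975, df)
exact_ci = (estimate - t_crit * std_error, estimate + t_crit * std_error)

print(rough_ci, exact_ci)
print("interval contains 0 (cannot reject the null):",
      exact_ci[0] <= 0 <= exact_ci[1])
```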