03-estimation.qmd: 17 additions & 1 deletion
@@ -355,7 +355,7 @@ There are theoretical probability distributions that do not work with a standard
# In questions, point out the symmetry of the normal approximation to the confidence interval versus the asymmetry of the percentile-based confidence interval.
```
- As you might remember from @sec-boot-approx, we simulate a sampling distribution if we bootstrap a statistic, for instance median candy weight in a sample bag. We can use this sampling distribution to construct a confidence interval. For example, we take the values separating the bottom 2.5% and the top 2.5% of all samples in the bootstrapped sampling distribution as the lower and upper limits of the 95% confidence interval. We will encounter the bootstrapping method for confidence intervals around regression coefficient of mediator again in chapter 11.
+ As you might remember from @sec-boot-approx, we simulate a sampling distribution if we bootstrap a statistic, for instance the median candy weight in a sample bag. We can use this sampling distribution to construct a confidence interval. For example, we take the values separating the bottom 2.5% and the top 2.5% of all samples in the bootstrapped sampling distribution as the lower and upper limits of the 95% confidence interval. We will encounter the bootstrapping method for confidence intervals around the regression coefficient of a mediator again in @sec-mediation.
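The percentile method just described can be sketched in a few lines of Python; the candy weights below are invented for illustration, not data from the book:

```python
import random
import statistics

random.seed(42)
# Hypothetical candy weights (grams) from one sample bag
weights = [2.1, 2.4, 1.9, 2.8, 2.2, 2.6, 2.0, 2.5, 2.3, 2.7]

boot_medians = []
for _ in range(5000):
    resample = random.choices(weights, k=len(weights))  # draw with replacement
    boot_medians.append(statistics.median(resample))

boot_medians.sort()
lower = boot_medians[int(0.025 * len(boot_medians))]  # separates the bottom 2.5%
upper = boot_medians[int(0.975 * len(boot_medians))]  # separates the top 2.5%
print(f"95% bootstrap CI for the median: {lower} to {upper}")
```

The interval endpoints are simply the 2.5th and 97.5th percentiles of the bootstrapped medians.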
It is also possible to construct the entire sampling distribution with exact approaches. Both the standard error and percentiles can then be used to create confidence intervals. However, this can be very demanding in terms of computer time, so exact approaches to the sampling distribution usually only report _p_ values (see @sec-pvalue), not confidence intervals.
@@ -373,8 +373,24 @@ It is also possible to construct the entire sampling distribution in exact appro
## Confidence Intervals in SPSS {#sec-SPSS-CI}
+ Now that you have learned about confidence intervals, it is useful to know how to add them to every analysis you will run from now on.
+
### Instruction
+ In the video below we show how to set confidence intervals in SPSS for several analyses, such as t-tests (@sec-probmodels), ANOVAs (@sec-anova), and regression analyses (@sec-moderationcat, -@sec-moderationcont, -@sec-confounder, and -@sec-mediation). Now that you have learned what confidence intervals are, and after this section also how to apply the correct settings in SPSS, we ask you to always report the confidence intervals of your results in addition to the test statistic, _p_ value, and effect size. This provides a more complete picture of the research findings; we will elaborate on this in the next chapter.
+
+ When we execute a t-test, the 95% confidence interval is already set as the default option. We only have to change this setting if we want a confidence interval with a different confidence level (e.g., 90%). In @fig-interval-level we showed the effect of the confidence level on the precision of the interval.
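The relationship between confidence level and interval width can be illustrated with a short Python sketch, using a normal approximation for the mean (the data are made up; SPSS uses a t-based interval, so the exact numbers will differ slightly):

```python
from statistics import NormalDist, mean, stdev

data = [4.8, 5.1, 4.6, 5.3, 4.9, 5.2, 5.0, 4.7, 5.4, 4.9]  # made-up sample
m = mean(data)
se = stdev(data) / len(data) ** 0.5  # standard error of the mean

def interval(level):
    z = NormalDist().inv_cdf(0.5 + level / 2)  # e.g. z is about 1.96 for 95%
    return m - z * se, m + z * se

widths = {}
for level in (0.90, 0.95, 0.99):
    lo, hi = interval(level)
    widths[level] = hi - lo
    print(f"{level:.0%} CI: {lo:.2f} to {hi:.2f}")
```

A higher confidence level always yields a wider, less precise interval around the same sample mean.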
+
+ In the case of an ANOVA, no confidence interval is reported for the F statistic. However, if we conduct a post-hoc test, we automatically get a 95% confidence interval. Here we can change the confidence level by adjusting the significance level in the `Post Hoc` dialog. When we use the General Linear Model, as we do for a two-way ANOVA, the significance level option can be found in the `Options` dialog.
+
+ When we conduct a regression analysis, we have to explicitly turn on the option for confidence intervals. This can be done in the `Statistics` dialog, where we can also change the confidence level.
+
+ Lastly, when we want a confidence interval for a correlation coefficient, we need to use the `Bootstrap` option. Bootstrapping was introduced in @sec-boot-approx. Be aware that in every analysis we have the opportunity to bootstrap the intervals; how to add bootstrapping is shown in @vid-SPSSbootstrap1 in @sec-boot-spss.
+
+ _Note._ The confidence level is the relative width of the interval in percentages, e.g., 90%, 95%, or 99%. The confidence interval is the absolute width of the interval, that is, the values belonging to those percentages, e.g., 4.3 to 5.6.
+
+ Watch the video below for step-by-step instructions, details, and additional information.
+
::: {#vid-SPSSconflevel}
{{< video https://www.youtube.com/embed/GoGrsHpfIWM
10-confounding.qmd: 16 additions & 2 deletions
@@ -59,7 +59,7 @@ It is important to note that the effect is only unique in comparison to the othe
If we include a confounder as a new independent variable in the model, the partial effects of the other independent variables in the model change. In @fig-mediation-multipleregression, for instance, this happens if you add news site use to a model containing age as a predictor of newspaper reading time. The effects of the other independent variables are adjusted to a new situation, namely one with news site use as an additional independent variable. News site use helps to predict variation in the dependent variable, so the variation left to be explained by age changes. In @sec-confounders, we will learn that regression coefficients can increase or decrease when confounders are included in the model.
- If we want to interpret regression coefficients as causal effects, for example, whether news site use causes people to spend less time on reading newspapers, we must ensure that there are no important confounders. We will discuss this in @sec-mediation (@sec-causalcriteria).
+ If we want to interpret regression coefficients as causal effects, for example, whether news site use causes people to spend less time on reading newspapers, we must ensure that there are no important confounders. We will discuss this in @sec-causalcriteria.
#### Confounders are not included in the regression model
@@ -294,7 +294,7 @@ To summarize the two types of confounders:
## Comparing Regression Models in SPSS {#sec-compmodelSSPSS}
We can detect confounders by adding each independent variable as a separate *Block* in a linear regression model (the *Linear* option in the *Regression* submenu). The SPSS output estimates a regression model for each block (@fig-confounderstable).
@@ -309,6 +309,20 @@ In the first model, interest in politics is the only predictor. One additional u
### Instructions
+ In the video below we demonstrate how to identify confounders by comparing regression models in SPSS.
+
+ Let us imagine that we are part of a political communication research team. Elections are coming up, and we are interested in whether people read the news, get information on the elections and campaigns, and which factors might play a role in this. Our main interest is the relationship between newspaper reading time and news site use. We add education and age as possible confounders in our model; by adding them to the model, these possible confounders become covariates.
+
+ We conduct what we call a _stepwise regression_: we add the predictors to the SPSS model one by one, as discussed in @sec-essanalconfounders. We perform a regression as always, through `Analyze > Regression > Linear`. We add readingtime as the `Dependent` variable. In the `Independent(s)` window, we add newssite and click `Next`, add education and click `Next`, and add age. In the `Statistics` dialog, we tick `Confidence intervals`, 95%. Then we run the analysis; remember to first click `Paste` and run the analysis from the syntax file.
+
+ In the video, the assumptions of regression analysis are not checked, because they are not the focus of this chapter, so we do not go into those settings here. Please refer to @sec-moderationcat and -@sec-moderationcont if you want to look into the assumptions.
+
+ In the output we first see the table _Variables Entered/Removed_, which shows whether we entered the variables in the correct order. The next table, _Model Summary_, reports the R and R-squared (the explained variance) of the different models. The table _Coefficients_ provides the remaining regression output: the unstandardized regression coefficient, the standard error, the standardized regression coefficient, the t value, the p value, and the confidence interval.
+
+ In the _Coefficients_ table we can compare the models to explore whether the added variables were confounders. Note that we can only compare subsequent models: in this output we can compare model 2 to model 1 and model 3 to model 2, but not model 3 to model 1. If we compare the results of model 3 to model 2, we see that the unstandardized regression coefficients of both news site use and education level (in years) move closer to zero. The coefficient of news site use becomes less negative (-5.770 becomes -1.297) and the coefficient of education level becomes less positive (.463 becomes .176). In other words, the individual effects of both news site use and education level become weaker once we add age. Hence, based on comparing model 3 to model 2, we conclude that the predictor age (a covariate in model 3) was a reinforcing confounder in model 2.
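The logic of this model comparison can be mimicked on simulated data in Python. The variable names echo the example, but the data are generated for illustration and the assumed effect sizes are arbitrary:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 500
age = rng.normal(50, 15, n)
news_site = -0.05 * age + rng.normal(0, 1, n)          # age influences news site use
reading_time = 0.30 * age - 1.0 * news_site + rng.normal(0, 2, n)

def ols_coefs(y, *predictors):
    """Ordinary least squares: intercept plus one column per predictor."""
    X = np.column_stack([np.ones(len(y)), *predictors])
    return np.linalg.lstsq(X, y, rcond=None)[0]

b_model1 = ols_coefs(reading_time, news_site)[1]       # news site use only
b_model2 = ols_coefs(reading_time, news_site, age)[1]  # age added as covariate

print(f"news site coefficient without age: {b_model1:.2f}")
print(f"news site coefficient with age:    {b_model2:.2f}")
```

Because age drives both variables here, the news site coefficient moves closer to zero (toward the -1.0 built into the simulation) once age is included, which is the signature of a reinforcing confounder.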
+
+ Watch the video below for step-by-step instructions, more details, and additional information.
+
::: {#vid-SPSSregconfound}
{{< video https://www.youtube.com/embed/Du2mzxWifCM