The Pretest-Posttest Control Group Design

This design takes the following form:

R O1 X O2
R O3 O4

(R indicates random assignment, O an observation or measurement, and X the treatment.)
This design controls for all seven of the threats to validity described so far. An explanation of how it controls each threat follows.
- History--this is controlled in that the general history events that might have produced the O1-to-O2 change would also produce the O3-to-O4 change. This holds only if the experiment is run in a specific manner: the treatment and control groups may not be tested at different times or in vastly different settings, since those differences may affect the results. Rather, the control and experimental groups must be tested simultaneously. Intrasession history must also be taken into account. For example, if the groups truly are run simultaneously, then different experimenters must be involved, and differences between the experimenters may themselves contribute to effects.
A solution to history in this case is the randomization of experimental occasions--balanced in terms of experimenter, time of day, day of week, and so on.
- Maturation and testing--these are controlled in that they are manifested equally in both treatment and control groups.
- Instrumentation--this is controlled where conditions control for intrasession history, especially where fixed tests are used. When observers or interviewers are used, however, there is potential for problems. If there are not enough observers to be randomly assigned to experimental conditions, then care must be taken to keep the observers ignorant of the purpose of the experiment.
- Regression--this is controlled in terms of mean differences regardless of the extremity of scores or characteristics, provided the treatment and control groups are randomly assigned from the same extreme pool. In that case, both groups will regress similarly, regardless of treatment.
- Selection--this is controlled by randomization.
- Mortality--this design is often said to control for mortality, but it may or may not. Unless the mortality rate is equal in the treatment and control groups, it is not possible to say with certainty that mortality did not contribute to the experimental results. Even when equal mortality actually occurs, there remains the possibility of complex interactions that make drop-out rates differ between the two groups. Conditions between the two groups must remain similar--for example, if the treatment group must attend treatment sessions, then the control group must also attend sessions in which either no treatment occurs or a "placebo" treatment occurs. Even then, threats to validity remain: the mere presence of a placebo may contribute to an effect similar to the treatment, since the placebo must be somewhat believable and may therefore end up producing similar results.
The factors described so far affect internal validity; they could produce changes that might be misinterpreted as the result of the treatment. These are called main effects, and because they are controlled in this design, the design has internal validity.
However, this design is subject to threats to external validity (also called interaction effects, because they involve an interaction between the treatment and some other variable). It is important to note that external validity, or generalizability, always involves extrapolation into a realm not represented in one's sample.
In contrast, threats to internal validity are solvable within the limits of the logic of probability statistics: we can control for them within the experiment conducted. External validity, or generalizability, cannot be established logically, because we cannot logically extrapolate to different conditions (Hume's truism that induction or generalization is never fully justified logically).
External threats include:
- Interaction of testing and X--because the interaction between taking a pretest and the treatment itself may affect the results for the experimental group, it is desirable to use a design that does not include a pretest.
- Interaction of selection and X--although selection is controlled by randomly assigning subjects to the experimental and control groups, the effects demonstrated may hold true only for the population from which those groups were drawn. For example, a researcher seeking schools to observe may be turned down by nine and accepted by the tenth. The characteristics of that tenth school may differ greatly from the other nine, and therefore not represent an average school. Any report should therefore describe the population studied as well as any populations that declined to participate.
- Reactive arrangements--this refers to the artificiality of the experimental setting and the subjects' knowledge that they are participating in an experiment. Such a situation is unrepresentative of the school setting, or any natural setting, and can seriously affect the experimental results. To mitigate this problem, experiments should be incorporated as variants of the regular curricula, tests should be integrated into the normal testing routine, and treatment should be delivered by regular staff to individual students.
Research in schools should be conducted in this manner: ideas for research should originate with teachers or other school personnel; the designs should be worked out with someone expert in research methodology; the research itself should be carried out by those who proposed it; results should be analyzed by the expert; and the final interpretation should be delivered by an intermediary.
Tests of significance for this design--even when this design is developed and conducted appropriately, statistical tests of significance are not always used appropriately.
- Wrong statistic in common use--many researchers compute two t-tests, one for the pre-post difference in the experimental group and one for the pre-post difference in the control group. If the experimental group's t is statistically significant while the control group's is not, the treatment is said to have an effect. This, however, ignores how close the non-significant test may have been to significance. A better procedure is a 2x2 repeated-measures ANOVA, with the pre-post difference as the within-subjects factor, group as the between-subjects factor, and the interaction of the two factors as the test of the treatment effect.
- Use of gain scores and covariance--the most common test is to compute pre-posttest gain scores for each group and then compute a t-test between the experimental and control groups on those gain scores. Randomized "blocking" or "leveling" on pretest scores and the analysis of covariance are usually preferable to simple gain-score comparisons.
- Statistics for random assignment of intact classrooms to treatments--when intact classrooms have been assigned at random to treatments (as opposed to individuals), class means are used as the basic observations, and treatment effects are tested against variation in these means. A covariance analysis would use pretest means as the covariate.
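To make the gain-score comparison concrete, here is a minimal sketch in JavaScript (data and function names are invented for illustration): it computes per-subject gains and a pooled two-sample t statistic between the groups.

```javascript
// Sketch: t-test on gain scores (illustrative data and names).

// Per-subject gain = posttest - pretest.
function gains(pre, post) {
  return post.map((p, i) => p - pre[i]);
}

function mean(xs) {
  return xs.reduce((a, b) => a + b, 0) / xs.length;
}

// Pooled two-sample t statistic comparing group mean gains.
function tStatistic(a, b) {
  const ma = mean(a);
  const mb = mean(b);
  const ssa = a.reduce((s, x) => s + (x - ma) ** 2, 0);
  const ssb = b.reduce((s, x) => s + (x - mb) ** 2, 0);
  const pooledVar = (ssa + ssb) / (a.length + b.length - 2);
  return (ma - mb) / Math.sqrt(pooledVar * (1 / a.length + 1 / b.length));
}

const expGain = gains([10, 11, 12], [12, 14, 16]); // [2, 3, 4]
const ctlGain = gains([10, 11, 12], [10, 12, 14]); // [0, 1, 2]
console.log(tStatistic(expGain, ctlGain)); // ≈ 2.449
```

The same t statistic on gain scores is what the recommended repeated-measures interaction test evaluates in the two-group case; blocking or covariance adjustment would replace the raw gains with residualized scores.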
The Solomon Four-Group Design

This design takes the following form:

R O1 X O2
R O3 O4
R X O5
R O6
In this design, subjects are randomly assigned to four different groups: experimental with both pre-posttests, experimental with no pretest, control with pre-posttests, and control without pretests. By using experimental and control groups with and without pretests, both the main effects of testing and the interaction of testing and the treatment are controlled. Therefore generalizability increases and the effect of X is replicated in four different ways.
Statistical tests for this design--a good way to test the results is to rule out the pretest as a "treatment" and analyze the posttest scores with a 2x2 analysis of variance: pretested versus unpretested crossed with X versus no X.
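The 2x2 breakdown can be sketched numerically. Assuming invented posttest cell means for the four groups, the main effects and the testing-by-treatment interaction fall out as simple contrasts (a real analysis would also need within-cell variances to form F tests):

```javascript
// Sketch: main effects and interaction from the Solomon design's four
// posttest cell means (values invented for illustration).
function solomonEffects({ preX, preCtl, noPreX, noPreCtl }) {
  return {
    // Main effect of X: X cells vs. control cells.
    treatment: (preX + noPreX) / 2 - (preCtl + noPreCtl) / 2,
    // Main effect of pretesting: pretested cells vs. unpretested cells.
    pretesting: (preX + preCtl) / 2 - (noPreX + noPreCtl) / 2,
    // Testing-by-treatment interaction: does X's effect differ when a
    // pretest was given?
    interaction: (preX - preCtl) - (noPreX - noPreCtl),
  };
}

console.log(solomonEffects({ preX: 20, preCtl: 15, noPreX: 18, noPreCtl: 15 }));
// → { treatment: 4, pretesting: 1, interaction: 2 }
```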
The Posttest-Only Control Group Design

This design takes the following form:

R X O1
R O2

This design can be thought of as the last two groups of the Solomon four-group design. It controls for testing as both a main effect and an interaction but, unlike the Solomon design, does not measure them. Measuring these effects, however, is not necessary to the central question of whether or not X had an effect. This design is appropriate when pretests are not acceptable.
Statistical tests for this design--the simplest is the t-test. However, covariance analysis and blocking on subject variables (prior grades, test scores, etc.) can be used to increase the power of the significance test, similarly to what a pretest provides.
Some researchers downplay the importance of causal inference and assert the worth of understanding. This understanding includes "what," "how," and "why." But is "why" a cause-and-effect relationship? If the question "why does X happen?" is answered with "because Y happens," does that imply "Y causes X"? If X and Y are merely correlated, the answer does not address the question "why." Replacing "cause and effect" with "understanding" makes conclusions confusing and misdirects researchers away from the issue of internal validity.
Some researchers apply a phenomenological approach to "explanation." In this view, an explanation applies only to a particular case in a particular time and place, and generalization is therefore considered inappropriate. In fact, such a particular explanation does not explain anything. For example, if one asks, "Why does Alex Yu behave in that way?", the answer could be "because he is Alex Yu. He is a unique human being. He has a particular family background and a specific social circle." Such "particular" statements are always true, and thereby misguide researchers away from the issue of external validity.
By Spike Brehm
This post has been cross-posted on VentureBeat.
The Single-Page App
Libraries like Backbone.js, Ember.js, and Angular.js are often referred to as client-side MVC (Model-View-Controller) or MVVM (Model-View-ViewModel) libraries. The typical client-side MVC architecture looks something like this:
This is great for the user because once the app is initially loaded, it can support quick navigation between pages without refreshing the page, and if done right, can even work offline.
This is great for the developer because the idealized single-page app has a clear separation of concerns between the client and the server, promoting a nice development workflow and preventing the need to share too much logic between the two, which are often written in different languages.
Trouble in Paradise
In practice, however, there are a few fatal flaws with this approach that prevent it from being right for many use cases.
An application that can only run on the client cannot serve HTML to crawlers, so it will have poor SEO by default. Web crawlers function by making a request to a web server and interpreting the result; but if the server returns a blank page, that is not of much value. There are workarounds, but not without jumping through some hoops.
While the ideal case can lead to a nice, clean separation of concerns, inevitably some bits of application logic or view logic end up duplicated between client and server, often in different languages. Common examples are date and currency formatting, form validations, and routing logic. This makes maintenance a nightmare, especially for more complex apps.
Some developers, myself included, feel bitten by this approach — it’s often only after having invested the time and effort to build a single-page app that it becomes clear what the drawbacks are.
A Hybrid Approach
At the end of the day, we really want a hybrid of the new and old approaches: we want to serve fully-formed HTML from the server for performance and SEO, but we want the speed and flexibility of client-side application logic.
An isomorphic app might look like this, dubbed here “Client-server MVC”:
In this world, some of your application and view logic can be executed on both the server and the client. This opens up all sorts of doors — performance optimizations, better maintainability, SEO-by-default, and more stateful web apps.
We launched an isomorphic library of our own earlier this year. Called Rendr, it allows you to build a Backbone.js + Handlebars.js single-page app that can also be fully rendered on the server-side. Rendr is a product of our experience rebuilding the Airbnb mobile web app to drastically improve pageload times, which is especially important for users on high-latency mobile connections. Rendr strives to be a library rather than a framework, so it solves fewer of the problems for you compared to Mojito or Meteor, but it is easy to modify and extend.
Abstraction, Abstraction, Abstraction
That these projects tend to be large, full-stack web frameworks speaks to the difficulty of the problem. The client and server are very dissimilar environments, and so we must create a set of abstractions that decouple our application logic from the underlying implementations, so we can expose a single API to the application developer.
Routing

We want a single set of routes that map URI patterns to route handlers. Our route handlers need access to HTTP headers, cookies, and URI information, and must be able to specify redirects without directly accessing window.location (browser) or req and res (Node.js).
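A minimal sketch of such a shared route table might look like this (names are invented, not Rendr's or any particular library's API); handlers see a normalized context instead of window.location or Node's req/res:

```javascript
// Sketch: one route table shared by server and client.
function showUser(ctx, userId) {
  if (userId === 'anonymous') return ctx.redirect('/login');
  return { template: 'user', data: { id: userId } };
}

const routes = [{ pattern: /^\/users\/(\w+)$/, handler: showUser }];

// Each environment supplies its own context (e.g. redirect via
// res.redirect on the server, via pushState on the client).
function dispatch(path, ctx) {
  for (const { pattern, handler } of routes) {
    const m = path.match(pattern);
    if (m) return handler(ctx, ...m.slice(1));
  }
  return null; // no route matched
}

const serverCtx = { redirect: (to) => ({ redirect: to }) };
console.log(dispatch('/users/spike', serverCtx)); // { template: 'user', data: { id: 'spike' } }
```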
Fetching and persisting data
We want to describe the resources needed to render a particular page or component independently from the fetching mechanism. The resource descriptor could be a simple URI pointing to a JSON endpoint, or for larger applications, it may be useful to encapsulate resources in models and collections and specify a model class and primary key, which at some point would get translated to a URI.
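A hedged sketch of such a resource descriptor, with an invented model-to-URI mapping:

```javascript
// Sketch: resource descriptors decoupled from fetching (the model-to-URI
// mapping below is invented for illustration).
const apiRoutes = {
  Listing: (id) => `/api/v1/listings/${id}`,
  User: (id) => `/api/v1/users/${id}`,
};

function toURI(descriptor) {
  // A descriptor can be a raw URI pointing to a JSON endpoint...
  if (typeof descriptor === 'string') return descriptor;
  // ...or a model class plus primary key, resolved at fetch time.
  return apiRoutes[descriptor.model](descriptor.id);
}

console.log(toURI({ model: 'Listing', id: 42 })); // "/api/v1/listings/42"
console.log(toURI('/api/v1/search?q=sf'));        // passed through unchanged
```

Because the descriptor carries no fetching logic, the server can resolve it with an in-process call while the client issues an XHR, without the view code changing.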
View rendering

Whether we choose to directly manipulate the DOM, stick with string-based HTML templating, or opt for a UI component library with a DOM abstraction, we need to be able to generate markup isomorphically. We should be able to render any view on either the server or the client, depending on the needs of our application.
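A toy illustration of environment-agnostic rendering, with a hand-rolled {{key}} replacer standing in for Handlebars.js:

```javascript
// Sketch: a view renders to an HTML string in either environment; only
// the final step differs (send as the response body on the server, set
// innerHTML and bind events on the client).
function template(tpl, data) {
  return tpl.replace(/\{\{(\w+)\}\}/g, (_, key) => data[key]);
}

function renderView(view) {
  return template(view.tpl, view.data);
}

const userView = { tpl: '<h1>{{name}}</h1>', data: { name: 'Spike' } };
console.log(renderView(userView)); // "<h1>Spike</h1>"
```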
Building and packaging
It turns out writing isomorphic application code is only half the battle. Tools like Grunt and Browserify are essential parts of the workflow for actually getting the app up and running. There can be a number of build steps: compiling templates, including client-side dependencies, applying transforms, minification, etc. The simple case is to combine all application code, views, and templates into a single bundle, but for larger apps this can result in hundreds of kilobytes to download. A more advanced approach is to create dynamic bundles and introduce asset lazy-loading; however, this quickly gets complicated. Static-analysis tools like Esprima can allow ambitious developers to attempt advanced optimization and metaprogramming to reduce boilerplate code.
Composing Together Small Modules
Being first to market with an isomorphic framework means you have to solve all these problems at once. But this leads to large, unwieldy frameworks that are hard to adopt and integrate into an already-existing app. As more developers tackle this problem, we’ll see an explosion of small, reusable modules that can be integrated together to build isomorphic apps.
To demonstrate this point, I’ve created a sample app called isomorphic-tutorial that you can check out on GitHub. By combining a few modules, each of which can be used isomorphically, it’s easy to create a simple isomorphic app in just a few hundred lines of code. It uses Director for server- and browser-based routing, Superagent for HTTP requests, and Handlebars.js for templating, all built on top of a basic Express.js app. Of course, as an app grows in complexity, one has to introduce more layers of abstraction, but my hope is that as more developers experiment with this, new libraries and standards will emerge.
The View From Here
Also keep tabs on the evolution of the Airbnb web apps by following me at @spikebrehm and the Airbnb Engineering team at @AirbnbEng.