Experimental Design and Process Optimization with R
Gerhard Krennrich
1 Introduction
The present document is a short and elementary course on the Design of Experiments (DoE) and empirical process optimization with the open-source software R. The course is self-contained and does not assume any prior knowledge of statistics or mathematics beyond high-school level. Statistical concepts will be introduced on an elementary level and made tangible with R code and R graphics based on simulated and real-world data.

So, then, what is DoE and why should the reader become familiar with its concepts? Very briefly, DoE is the science of varying many experimental parameters in a systematic way to gain insight on how to further improve and optimize these parameters. Chapter 2 will show how and why multidimensional DoE techniques are superior to the classical "one-dimensional" optimization approach. Chapter 6 will demonstrate why and how DoE can be combined with optimization. Finally, the use of DoE and optimization will be demonstrated in practice in chapter 7 for improving the performance of a catalytic system.

Historically, Experimental Design started as a branch of statistics in the early years of the 20th century and has meanwhile grown into a mature method with a plethora of applications in the experimental sciences. Consequently, there are many good and comprehensive books available on DoE, some of which we will refer to frequently in the present text, namely (George E.P. Box, Norman R. Draper 1987), (D.C. Montgomery 2013) and (G.E.P. Box, W.G. Hunter, J.S. Hunter 2005). A more recent text with emphasis on the use of R in conjunction with DoE is (John Lawson 2015). Linear models are comprehensively covered, e.g., by the textbook (A. Sen, M. Srivastava 1990). A general, though fairly technical, text on linear and nonlinear statistical model building is the excellent book (T. Hastie, R. Tibshirani, J. Friedman 2009). (J.G. Kalbfleisch 1985) is a smooth introduction to statistics, probability and statistical inference. The present text draws on these books and on many years of experience as a statistical consultant in the chemical industry. Most examples in this course are therefore taken from applications and optimization projects in the chemical sciences.

The primarily intended readers of this document are chemists and engineers entrusted with empirical optimization in research and development. However, the presented methods and concepts are fairly generic, and scientists working in other areas such as biology or the medical sciences may benefit from the text as well. As to software, R, probably together with Python, is the only open-source software that combines the whole spectrum of DoE and optimization with the flexibility of a powerful scripting language allowing any kind of data pre- and post-processing within one software environment. That makes R, in my opinion, superior to many commercial GUI-based tools, which often buy user-friendliness at the expense of flexibility.
1.1 How to install R
The R-software can be downloaded free of charge from the R repository CRAN
An IDE (Integrated Development Environment) is required for working smoothly with R. An IDE allows editing, running and debugging of R code and managing program in- and output. In principle any IDE can be used, but we recommend RStudio as the de-facto standard.
Get R-Studio IDE
The R introduction at CRAN is a concise introduction to the R language: A short R-introduction
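As a quick sanity check after installing R and RStudio, you can run a few lines in the console; the package name used with install.packages() below is just an example.

```r
# Print the installed R version
R.version.string

# Install an add-on package from CRAN (any package name works; ggplot2 is
# used here purely as an example)
install.packages("ggplot2")

# Load the installed package into the current session
library(ggplot2)
```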
1.2 Some remarks on how to read the present text
This document is not an introduction to the R language; rather, it follows the philosophy of "learning by doing". In this spirit, the above-mentioned text R-introduction is recommended as a first reference, together with the R examples on DoE and optimization presented here. As it is usually easier to modify existing code than to write code from scratch, it is hoped that the R examples in this course will help the reader learn both R and DoE more rapidly. The course is divided into seven chapters. There is, however, one stand-alone chapter, chapter 5, which can be skipped by readers not explicitly dealing with mixture problems. The final chapter 7 is a published (Siebert M., Krennrich G., Seibicke M., Siegle A.F., Trapp O. 2019) real-world example combining many elements of DoE and optimization for improving the performance of a catalytic system. This application should encourage readers to use these powerful methods in their own projects.
A. Sen, M. Srivastava. 1990. Regression Analysis, Theory, Methods and Applications . 1st ed. Springer-Verlag, New York.
D.C. Montgomery. 2013. Design and Analysis of Experiments . 8th ed. John Wiley & Sons Inc.
G.E.P. Box, W.G. Hunter, J.S. Hunter. 2005. Statistics for Experimenters: Design, Innovation, and Discovery . 2nd ed. John Wiley & Sons, Hoboken.
George E.P. Box, Norman R. Draper. 1987. Empirical Model-Building and Response Surfaces . 1st ed. John Wiley & Sons.
J.G. Kalbfleisch. 1985. Probability and Statistical Inference, Vol 1&2 . 2nd ed. Springer.
John Lawson. 2015. Design and Analysis of Experiments with R . 1st ed. Chapman & Hall.
Siebert M., Krennrich G., Seibicke M., Siegle A.F., Trapp O. 2019. “Identifying High-Performance Catalytic Conditions for Carbon Dioxide Reduction to Dimethoxymethane by Multivariate Modelling.” Chemical Science 10:45. https://pubs.rsc.org/en/content/articlelanding/2019/sc/c9sc04591k#!divAbstract .
T. Hastie, R. Tibshirani, J. Friedman. 2009. The Elements of Statistical Learning . 2nd ed. Springer-Verlag.
Design of Experiments with Mixtures and their Analysis with R
Posted on June 30, 2022 by R in the Lab
Using R for design and analysis of results for experiments with mixtures.
“Mixtures are absolutely everywhere you look. Anything you can combine is a mixture.” (chem4kids.com)
All the code and data in this post are available in the repository: Design of Experiments with Mixtures and their Analysis with R
What are experimental designs with mixtures?
These are designs aimed at determining the effect of the proportion of different components of a mixture on one or more response variables.
We must emphasize that we are referring to the proportions of the different components in the mixture and not to their absolute amount. That is, it is the proportion that determines the effect.
This type of design has application in the formulation of many products such as beverages, foods, fuels, paints, etc.
In an experimental design of mixtures, the sum of the proportions of the q components is equal to 1: x1 + x2 + ... + xq = 1.
And the proportion of each component must lie between 0 and 1: 0 ≤ xi ≤ 1 for i = 1, 2, ..., q.
In a practical problem, calculating the proportions of each component is straightforward. For example, suppose that the total amount of three components in a soda equals 2.5 g, and the respective amounts are 1, 1 and 0.5 g. The proportion of each of the first two ingredients is 1/2.5 = 0.4, and the proportion of the third ingredient is 0.5/2.5 = 0.2. Thus 0.4 + 0.4 + 0.2 = 1.
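As a minimal sketch, the same arithmetic can be done in R, using the amounts from the soda example above:

```r
# Amounts (in g) of the three components
amounts <- c(1, 1, 0.5)

# Proportions are the amounts divided by the total amount
props <- amounts / sum(amounts)
props       # 0.4 0.4 0.2
sum(props)  # 1
```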
An experimental design of mixtures will help us determine the proportions of each component to produce the best flavor or to reduce some undesirable physical property in the liquid, for example.
Types of mixture designs and their generation in R
In this post I will focus on two types of mixture designs: simplex-lattice and simplex-centroid. Generating these designs is simple with the mixexp package.
Simplex-lattice design
The simplex-lattice design considers q components and allows fitting a model of order m to the experimental data. To generate a design with 3 components of order 3 we use the function SLD() as follows:
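A sketch of the call (the object name des_sl is my own choice):

```r
library(mixexp)

# Simplex-lattice design for 3 components and a lattice of degree 3
des_sl <- SLD(3, 3)
des_sl
```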
It should be noted that it is not necessary to specify the levels (proportions) of each component, as for a lattice of degree m these are determined automatically as 0, 1/m, 2/m, ..., 1.
If the proportions in each row are added together, the result equals 1. In addition, for three components it is possible to use the DesignPoints() function to visualize the experimental region:
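A sketch of both checks, assuming the design columns are named x1, x2 and x3 as produced by SLD():

```r
# Each run (row) of the design should sum to 1
rowSums(des_sl[, c("x1", "x2", "x3")])

# Plot the design points on the ternary (simplex) region
DesignPoints(des_sl)
```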
In this figure, the three vertices correspond to pure mixtures (formed by a single ingredient), the three sides or edges represent binary mixtures that have only two of the three components. The interior points of the triangle represent the ternary mixtures in which the three ingredients are different from zero.
Finally, the design can be exported to our working folder with the function write.csv() :
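For example (the file name is arbitrary):

```r
# Export the design as a CSV file to the working directory
write.csv(des_sl, "simplex_lattice_design.csv", row.names = FALSE)
```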
Simplex-centroid design
If predictions are to be made within the experimental region, it is important to include interior (centroid) points. The simplex-centroid design includes the centroids of all sub-mixtures of the components (binary blends, ternary blends, and so on, up to the overall centroid). The SCD() function is used to generate it:
By visualizing the experimental region with three components, it becomes much clearer what we mean by intermediate mixtures:
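A sketch of the design generation and its plot, following the same pattern as above:

```r
# Simplex-centroid design for 3 components
des_sc <- SCD(3)
des_sc

# Visualize the design points, including the centroids of the sub-mixtures
DesignPoints(des_sc)
```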
Mixture designs with component constraints
It is normal that, due to technical or economic constraints for example, the proportion of one or more components is restricted to limits of the form Li ≤ xi ≤ Ui.
It is possible to generate designs considering the constraints for each component with the Xvert() function:
It is also possible to visualize the experimental subregion:
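A sketch with illustrative lower and upper bounds for the three components (the actual limits depend on the problem); the argument names follow the mixexp documentation as I recall it, so please check ?Xvert:

```r
# Extreme-vertices design for 3 components with bounds on each proportion
# (the bounds below are illustrative only)
des_con <- Xvert(nfac = 3,
                 lc = c(0.2, 0.1, 0.1),  # lower constraints
                 uc = c(0.6, 0.5, 0.4),  # upper constraints
                 ndm = 1)                # also add edge centroids (see ?Xvert)
des_con

# For three components the constrained subregion can be displayed inside
# the simplex
DesignPoints(des_con)
```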
Analysis of the results of a mixture design
For the example analysis, I will use the data published in Performance of reduced fat-reduced salt fermented sausage with added microcrystalline cellulose, resistant starch and oat fiber using the simplex design . In this study, the effect of the proportion of three ingredients on different characteristics of fermented sausages was determined. In this case I will only focus the analysis on one of the response variables (hardness).
Data import
As usual, the first step in the analysis is to import our data into R:
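A sketch of the import, assuming the data sit in a CSV file with the component proportions in columns x1, x2, x3 and the response in a column Hardness; the file name and column names are assumptions on my part:

```r
# Import the fermented-sausage data (file name/path is only an example)
sausage <- read.csv("sausage_data.csv")
str(sausage)
head(sausage)
```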
Model fitting
You can use the lm() function to fit a full model or, for the same purpose, the MixModel() function of the mixexp package:
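A sketch of the full (special cubic) Scheffé model fitted with lm(); the MixModel() alternative is shown as a comment because the integer code selecting the model type should be checked in ?MixModel:

```r
# Full Scheffé model without intercept (-1): linear terms, binary blends
# and the ternary blend
mod_full <- lm(Hardness ~ -1 + x1 + x2 + x3 +
                 x1:x2 + x1:x3 + x2:x3 + x1:x2:x3,
               data = sausage)

# Alternative with mixexp; the 'model' code for the special cubic model is
# an assumption -- see ?MixModel:
# MixModel(sausage, "Hardness", mixcomps = c("x1", "x2", "x3"), model = 4)
```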
Note that these models do not include the mean (intercept). Because of the restriction that the components sum to 1, the parameters of a model with an intercept would not be unique; fitting the model without the intercept eliminates this linear dependence between the coefficients. As we will see later, the interpretation of each coefficient and the hypothesis tests related to them must be done in a special way for this type of design.
Model coefficients, their interpretation, and coefficients of determination
The summary() function will display a complete report with the coefficients of the model we previously selected, as well as the coefficients of determination:
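For example:

```r
# Coefficients, tests and R-squared values of the full model
summary(mod_full)
```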
In general, the coefficients in this type of model are interpreted as follows:
- Coefficients of the individual components. They do not measure the overall effect of component xi but only estimate the value of the response at the corresponding vertex of the simplex. If one of these coefficients is not significant, it does not mean that the effect of that component is unimportant, so hypothesis tests on them are usually ignored.
- Coefficients of the binary interactions. If the sign of the coefficient is positive, there is synergy between the two components; if it is negative, there is antagonism between them.
- Coefficient of the ternary interaction. It quantifies the effect of the ternary blend in the interior of the simplex.
The result report can be easily saved with the capture.output() function:
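For example (the file name is arbitrary):

```r
# Save the summary report as a plain text file in the working directory
capture.output(summary(mod_full), file = "full_model_summary.txt")
```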
step() function to improve the model
In order to improve the coefficients of determination or to simplify the model, non-significant terms are sometimes eliminated. This can be done somewhat subjectively, by trial and error, by eliminating one or more terms and then comparing with the full model. R also offers a systematic way to do this with the step() function, which uses the Akaike information criterion (AIC) iteratively to simplify and, ideally, improve the model.
The function can display a large number of results depending on the number of iterations it makes to simplify the model, so in this example I will directly save the results in a text file:
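A sketch that writes the whole AIC trace to a text file while keeping the selected model object:

```r
# Stepwise simplification of the full model based on AIC; the printed trace
# is captured in a text file (file name is arbitrary) and the selected model
# is stored in mod_step
capture.output(mod_step <- step(mod_full), file = "step_selection.txt")
```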
Subsequently, we only need to fit the simplified model:
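For example, re-using the formula chosen by step(); the terms that are retained will of course depend on your own data:

```r
# Refit the simplified model suggested by step()
mod_simp <- lm(formula(mod_step), data = sausage)
```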
Displaying a summary of the results shows that there is no big difference between the coefficients of determination of this model and those of the full model. However, the smaller number of terms may have some practical advantage depending on the problem:
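For example:

```r
# Compare with summary(mod_full)
summary(mod_simp)
```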
Lack-of-fit test
Another way to evaluate the quality of the model fit, if there is more than one repetition for any of the treatments, is by means of a lack-of-fit test . This can be done directly with the pureErrorAnova() function of the alr3 package:
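A sketch for the full model:

```r
library(alr3)

# ANOVA table that splits the residual sum of squares into lack of fit and
# pure error (this requires replicated design points)
pureErrorAnova(mod_full)
```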
For the simplified model:
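The call is the same, applied to the reduced fit:

```r
pureErrorAnova(mod_simp)
```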
In this test, if the p-value obtained for the lack-of-fit term is greater than 0.05, or than the significance level established by the experimenter, it can be concluded that the model fits the data adequately. Note how with the full model we came close to rejecting the null hypothesis of adequate fit, while with the simplified model the situation improved somewhat.
Visualization of the simplified model in two dimensions
It is possible to make a contour plot with the fitted model, only for the case of three components in the mixture:
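A sketch of the contour plot; the argument names follow the mixexp documentation as I recall it (check ?ModelPlot), and the axis labels mapping x1, x2, x3 to MCC, RS and OF are my own assumption:

```r
# Ternary contour plot of the predicted hardness over the simplex
ModelPlot(model = mod_simp,
          dimensions = list(x1 = "x1", x2 = "x2", x3 = "x3"),
          contour = TRUE,
          fill = TRUE,
          axislabs = c("MCC", "RS", "OF"),
          cornerlabs = c("x1", "x2", "x3"))
```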
The ModelPlot() function is also included in the mixexp package.
The graph can be exported in png format, for example, as follows:
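For example, by wrapping the plotting call between png() and dev.off(); the file name and resolution are arbitrary:

```r
png("hardness_contour.png", width = 1600, height = 1600, res = 300)
ModelPlot(model = mod_simp,
          dimensions = list(x1 = "x1", x2 = "x2", x3 = "x3"),
          contour = TRUE, fill = TRUE)
dev.off()
```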
Mixture effect plot
Another way to plot the results is by using an effect plot for the components of the mixture. This two-dimensional plot can be useful if you have more than three components in the mixture. To do this we can use the ModelEff() function included in mixexp :
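A sketch of the call; the integer code for the model and the remaining arguments are assumptions on my part, so please check ?ModelEff:

```r
# Effect plot for the three mixture components of the full model
ModelEff(nfac = 3,        # number of mixture components
         mod = 4,         # model code (assumed here to select the special cubic)
         dir = 1,
         ufunc = mod_full)
```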
ModelEff() displays the components in the same order as specified in the fitted model, so x1 corresponds to MCC, x2 to RS and x3 to OF. The plot starts from a reference mixture (usually the centre of the experimental region) and shows how the response changes as one of the components increases or decreases; when one component changes, the others increase or decrease proportionally. The disadvantage of ModelEff() is that only complete models, not simplified ones, can be used to make the plot.
The effect graph can be exported in the same way as the contour graph:
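Again by wrapping the call between png() and dev.off():

```r
png("hardness_effects.png", width = 1600, height = 1200, res = 300)
ModelEff(nfac = 3, mod = 4, dir = 1, ufunc = mod_full)  # arguments as above
dev.off()
```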
If the reader would like to consult more examples of analysis with the mixexp package, please check the document at the following link: Mixture Experiments in R Using mixexp .
Very good! That’s all for this post, thank you very much for visiting this blog.
Juan Pablo Carreón Hidalgo 🤓
https://github.com/jpch26
This work is licensed under a Creative Commons Attribution 4.0 International License .
Current state of R packages for the design of experiments
Your analytical toolkit matters very little if the data are no good. Ideally you want to know how the data were collected before delving into the analysis; better yet, get involved before the collection of data and design its collection. In this post I explore some of the top downloaded R packages for the design of experiments and the analysis of experimental data.
Monash University
February 3, 2021
Data collection
As many know, it doesn’t matter how good your analytical tools is if your data are rubbish. This sentiment is often captured in the expression “garbage in, garbage out”. It’s something we all seem to know but there is still a tendency for many of us to place a greater focus on the analysis 1 . This is perhaps all natural given that a potential for discovery is just so much more exciting than ensuring the quality of the collected data.
So what is considered good quality data? A lack of errors in the data? Data containing enough range of variables and a sufficient sample size for the downstream analysis? Giving an explicit definition of good quality data is a fraught exercise, but if you know how the data were collected, then you can better perform the initial data analysis ( Chatfield 1985 ) to weed out (or fix) potentially poor quality data. This step will likely get more value out of the data than fitting complex models to poor quality data.
Better still, if you can design the collection of the data so that it is optimised for the purpose of the analysis 2 , then you can potentially get even better value out of your data. Not all data collection starts with an explicit analytical plan, though. Furthermore, you may have very little control over how the data are collected. Often these are observational data, or a secondary use is being made of experimental data. This article will focus on data collection for an experiment where you have some control over the collection process.
Experimental data
All experiments are conducted with some objective in mind: a scientist may wish to test a hypothesis, a manufacturer may want to know which manufacturing process is better, or a researcher may want to understand some cause-and-effect relationship. A characteristic feature of an experiment is that the experimenter has control over some explanatory variables. In a comparative experiment, the control is over the allocation of treatments to subjects. Designing an experiment in the statistics discipline usually focuses on this allocation, although it's important to keep in mind that there are other decision factors in an experiment.
Data that are collected from experiments are what we refer to as experimental data. Because they were collected with some objective in mind, followed by some data collection plan, experimental data are often thought to be of better quality than observational data. But then again, if you can't quantify the quality of the data, you can't really tell. Certain scientific claims (e.g. causation, a better treatment) can only be substantiated by experiments, and so experimental data are held to a higher standard in general.
Design and analysis of experiments
There are altogether 83 R packages in the CRAN Task View of Design of Experiments & Analysis of Experimental Data as of 2022-09-18. 3 I'm going to refer to these packages as DoE packages, although some packages in the mix are more about the analysis of experimental data than the design of experiments, and some packages are missing from the list (e.g. DeclareDesign ). The DoE packages make up about 0.4% of the 18,592 packages available on CRAN.
The DoE packages don’t include survey design. These instead belong to the CRAN Task View of Official Statistics & Survey Methodology which contains 122 packages. While some surveys are part of an experimental study, most often they generate observational data.
Below I present a number of different analyses of these DoE packages. If you push the button in the top right corner of this article, you can toggle the display of the code, or alternatively you can have a look at the source Rmd document.
Bigram of DoE package titles and descriptions
Table @ref(tab:bigram-title) shows the most common bigrams in the titles of the DoE packages. It's perhaps not surprising that the words "optimal design" and "experimental design" are at the top. It's also likely that the phrase "design of experiments" appears often, but because a bigram is two consecutive words, it doesn't show up as such. You might then wonder whether, in that case, bigrams like "design of" or "of experiments" should make an appearance; however, "of" is a stop word, and stop words are filtered out, otherwise unwanted bigrams come up at the top.
There are a couple of bigrams, like "clinical trial" and "dose finding", that suggest applications in medical experiments, as well as "microarray experiment", which suggests applications in bioinformatics.
The title alone might be too succinct for text analysis, so I also had a look at the most common bigrams in the descriptions of the DoE packages, as shown in Table @ref(tab:bigram-desc). The counts in Table @ref(tab:bigram-desc) (and also in Table @ref(tab:bigram-title)) are across the DoE packages. To be clear, even if a bigram is mentioned multiple times within a description, it is only counted once per package. This removes the inflation of the counts due to one package mentioning the same bigram over and over again.
Again, not surprisingly, "experimental design" and "optimal design" come out on top in the DoE package descriptions. The words "graphical user" and "user interface" imply that the trigram "graphical user interface" was probably common.
Network of DoE package imports and dependencies
Figure @ref(fig:doe-network) shows the imports and dependencies between the DoE packages. We can see here that DoE.wrapper imports a fair number of DoE packages, which results in the major network cluster seen in Figure @ref(fig:doe-network). AlgDesign and DoE.base are each imported by four other DoE packages and so form an important base in the DoE world.
Figure caption: The network of imports and dependencies among the DoE packages alone. Each node represents a DoE package; DoE packages with no imports of or dependencies on other DoE packages are excluded. Each arrow represents the relationship between two packages such that the package at the tail is used by the package at the head of the arrow.
CRAN download logs
Figure @ref(fig:download-hist) shows the distribution of the total download counts of the DoE packages over the last 5 years 4 . This graph doesn't take into account that some DoE packages may only have been on CRAN for part of the last 5 years, so the counts favour DoE packages that have been on CRAN longer.
Figure caption: Histogram of the total download counts of the DoE packages over the last 5 years.
Top 5 DoE packages
The top 5 downloaded DoE packages at the time of this writing are AlgDesign , lhs , DiceDesign , DoE.base , and FrF2 . You can see the download counts in Figure @ref(fig:download-barplot).
Figure caption: The barplot shows the total downloads of the top 5 downloaded DoE packages over the period 2017-09-18 to 2022-09-16.
We can examine the top 5 DoE packages further by looking at their daily download counts, shown in the time plot below. The download counts are raw values, and these include downloads by CRAN mirrors and bots. There is a noticeable spike whenever there is an update to the CRAN package. This is partly because, when there is a new version of a package and you install other packages that depend on or import it, R will prompt you to install the new version. This means that the download counts are inflated, and to some extent you can artificially boost them by making regular CRAN updates. The adjustedcranlogs package ( Morgan-Wall 2017 ) makes a nice attempt to adjust the raw counts based on a certain heuristic. I didn't use it since the adjustment is stochastic and I appear to have hit a bug.
Figure caption: The plot shows the daily downloads of the top 5 downloaded DoE packages over the period 2017-09-18 to 2022-09-16. The vertical dotted lines correspond to the dates on which a new version of the corresponding package was released on CRAN.
Below we have a closer look at the functions of the top 5 downloaded DoE packages, ordered by their download counts.
- AlgDesign CRAN GitHub Wheeler ( 2019 ) Algorithmic Experimental Design. Originally written by Bob Wheeler; Jerome Braun has taken over maintenance of the package.
- agricolae CRAN de Mendiburu ( 2020 ) Statistical Procedures for Agricultural Research. Written and maintained by Felipe de Mendiburu.
- lhs CRAN GitHub Carnell ( 2020 ) Latin Hypercube Samples. Written and maintained by Rob Carnell.
- ez CRAN GitHub Lawrence ( 2016 ) Easy Analysis and Visualization of Factorial Experiments. Written and maintained by Michael A. Lawrence.
- DoE.base CRAN Grömping ( 2018 ) Full Factorials, Orthogonal Arrays and Base Utilities for DoE Packages. Written and maintained by Ulrike Grömping.
Before we look at the packages, let’s set a seed so we can reproduce the results.
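The seed value is arbitrary; any fixed number will do:

```r
set.seed(2022)
```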
We begin with the most downloaded DoE package, AlgDesign. The examples below follow the vignette of the AlgDesign package.
You can create a balanced incomplete block design using the optBlock function. It uses an optimal design framework, where the default criterion is the D criterion and the model is given in the first argument.
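A sketch close to the vignette example: 7 treatments arranged in 7 blocks of size 3.

```r
library(AlgDesign)

# Balanced incomplete block design: 7 treatments in 7 blocks of size 3,
# found by optimal blocking with the (default) D criterion
bib <- optBlock(~ ., withinData = factor(1:7), blocksizes = rep(3, 7))
bib$Blocks
```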
AlgDesign also includes helper functions to generate a factorial structure.
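For example, a full factorial with three 3-level factors as a candidate set (the variable names are my own choice):

```r
# Candidate set: full 3^3 factorial in coded levels
dat <- gen.factorial(levels = 3, nVars = 3, varNames = c("A", "B", "C"))
head(dat)
```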
This can serve as the candidate set for another function, say optFederov, which uses Federov's exchange algorithm to generate the design.
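A sketch using a quadratic model in the three factors; 14 trials are requested so that the result can be split into two blocks of 7 below:

```r
# D-optimal design with 14 runs selected from the candidate set by
# Federov's exchange algorithm
desF <- optFederov(~ quad(A, B, C), data = dat, nTrials = 14)
desF$design
```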
If you want to further randomise within blocks, you can pass the above result to optBlock .
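For example, arranging the 14 selected runs into two blocks of size 7:

```r
# Block the selected runs into 2 blocks of 7, re-optimising within blocks
desFBlk <- optBlock(~ quad(A, B, C), withinData = desF$design,
                    blocksizes = rep(7, 2))
desFBlk$Blocks
```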
agricolae is motivated by agricultural applications although the designs are applicable across a variety of fields.
The functions that create designs all begin with the prefix "design." and their names are reminiscent of the name of the experimental design; e.g. design.rcbd generates a Randomised Complete Block Design and design.split generates a Split-Plot Design.
Rather than going through each of the functions, I'll just show one. The command below generates a balanced incomplete block design with 7 treatments and a block size of 3. This is the same design structure as in the first example for AlgDesign. What do you think of the input and output?
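A sketch of the call; the treatment labels and the seed are my own choice:

```r
library(agricolae)

# Balanced incomplete block design for 7 treatments with block size 3
trt <- LETTERS[1:7]
bib_agricolae <- design.bib(trt, k = 3, seed = 42)
bib_agricolae$book   # the field book with the randomised plan
```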
More examples are given in the agricolae tutorial .
The lhs package is completely different from the previous two packages. It implements methods for creating and augmenting Latin Hypercube Samples and Orthogonal Array Latin Hypercube Samples. The treatment variables here are continuous parameters. In the example below, there are 10 parameters and 30 samples will be drawn.
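A sketch using the basic random Latin hypercube:

```r
library(lhs)

# 30 samples in a 10-dimensional parameter space; each column is a
# parameter scaled to [0, 1]
X <- randomLHS(n = 30, k = 10)
dim(X)
```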
lhs provides a number of methods to find an optimal design, each with its own criterion.
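For example (see the help pages for the criterion each function optimises):

```r
# Maximin criterion: maximise the minimum distance between points
X_maximin <- maximinLHS(30, 10)

# Columnwise-pairwise optimisation of the design
X_optimum <- optimumLHS(30, 10)
```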
The ez package is mainly focussed on the analysis of experimental data, but some functions, such as ezDesign, are useful for viewing the experimental structure.
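A minimal sketch with a made-up data frame, since ezDesign() simply counts and displays the observations per design cell; the data and column names are my own:

```r
library(ez)

# A toy experimental layout: 6 subjects, 2 groups, 4 time points
toy <- expand.grid(subject = factor(1:6),
                   time    = factor(1:4))
toy$group <- ifelse(as.integer(toy$subject) <= 3, "control", "treatment")

# Visualise the number of observations in each design cell
ezDesign(data = toy, x = time, y = subject, row = group)
```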
DoE.base provides utility functions for the special class design and, as seen in Figure @ref(fig:doe-network), DoE.base is used by four other DoE packages that are also maintained by Prof. Dr. Ulrike Grömping.
DoE.base contains functions to generate factorial designs easily.
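For example, a mixed-level full factorial (the factor names are my own choice):

```r
library(DoE.base)

# Full factorial with a 2-level, a 3-level and a 4-level factor (24 runs)
plan_fac <- fac.design(nlevels = c(2, 3, 4),
                       factor.names = c("temperature", "pressure", "catalyst"))
plan_fac
```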
It also contains functions to create orthogonal array designs.
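For example, asking for an orthogonal array that accommodates the requested factor levels:

```r
# Orthogonal array for four factors with 2, 3, 3 and 2 levels; oa.design()
# picks a suitable array from its catalogue (or falls back to a full factorial)
plan_oa <- oa.design(nlevels = c(2, 3, 3, 2))
plan_oa
```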
If you need to further randomise within a specified block, you can do this using rerandomize.design .
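A sketch; I am less certain of the exact arguments here, so check ?rerandomize.design:

```r
# Re-randomise the run order of an existing design object
plan_rerand <- rerandomize.design(plan_fac, seed = 123)
plan_rerand
```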
So those were the top 5 DoE packages. The APIs of the packages are quite distinct, and the objects they output vary from matrices to lists. DoE might be a dull area for many, but it's quite important for the downstream analysis. Perhaps if more of us talk about it, it may help invigorate the area!
At least from my teaching experience, statistics subjects are primarily about the analysis, and most research grants I've seen are about an analytical method. The analytical focus is reflected also in the R packages; there are 1,907 R packages on CRAN with the word "analysis" in the title as opposed to 287 R packages with the word "design" in their title. ↩︎
Keep in mind, though, that your analysis plan may change once you have actually collected the data. This is quite common in the analysis of plant breeding trials, since some spatial variation becomes apparent only after the data collection. ↩︎
I originally had a web-scraping error where I didn't remove duplicate entries, so the numbers presented at TokyoR and the SSA Webinar were wrong. ↩︎
As of 2022-09-18. ↩︎
Introduction to Econometrics with R
13 Experiments and Quasi-Experiments
This chapter discusses statistical tools that are commonly applied in program evaluation, where interest lies in measuring the causal effects of programs, policies or other interventions. An optimal research design for this purpose is what statisticians call an ideal randomized controlled experiment. The basic idea is to randomly assign subjects to two different groups, one that receives the treatment (the treatment group) and one that does not (the control group) and to compare outcomes for both groups in order to get an estimate of the average treatment effect.
Such experimental data are fundamentally different from observational data. For example, one might use a randomized controlled experiment to measure how much the performance of students on a standardized test differs between two classes where one has a "regular" student-teacher ratio and the other has fewer students. The data produced by such an experiment would be different from, e.g., the observed cross-sectional data on students' performance used throughout Chapters 4 to 8, where class sizes are not randomly assigned to students but instead are the result of economic decisions in which educational objectives and budgetary aspects were balanced.
For economists, randomized controlled experiments are often difficult or even infeasible to implement. For example, for ethical, moral and legal reasons it is practically impossible for a business owner to estimate the causal effect of psychological stress on worker productivity using an experiment in which workers are randomly assigned either to a treatment group that works under time pressure or to a control group that works under regular conditions, ideally without knowing that they are part of an experiment (see the box The Hawthorne Effect on p. 528 of the book).
However, sometimes external circumstances produce what is called a quasi-experiment or natural experiment. Such "as if" randomness allows the estimation of causal effects that are of interest to economists using tools very similar to those valid for ideal randomized controlled experiments. These tools draw heavily on the theory of multiple regression and also on IV regression (see Chapter 12). We will review the core aspects of these methods and demonstrate how to apply them in R using the STAR data set (see the description of the data set).
The following packages and their dependencies are needed for reproduction of the code chunks presented throughout this chapter:
- AER ( Christian Kleiber and Zeileis 2008 ) ,
- dplyr ( Wickham et al. 2023 ) ,
- MASS ( Ripley 2023 ) ,
- mvtnorm ( Genz et al. 2023 ) ,
- rddtools ( Stigler and Quast 2022 ) ,
- scales ( Wickham and Seidel 2022 ) ,
- stargazer ( Hlavac 2022 ) ,
- tidyr ( Wickham, Vaughan, and Girlich 2023 ) .
Make sure the following code chunk runs without any errors.
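A sketch of the corresponding setup chunk; it simply attaches the packages listed above (install any missing ones with install.packages() first):

```r
library(AER)
library(dplyr)
library(MASS)
library(mvtnorm)
library(rddtools)
library(scales)
library(stargazer)
library(tidyr)
```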