Commit 453d40f8 authored by Armin Rauschenberger's avatar Armin Rauschenberger

automation

parent 11ac779a
......@@ -55,19 +55,29 @@
#' further arguments passed to \code{\link[glmnet]{glmnet}}
#'
#' @references
#' A Rauschenberger, E Glaab (2019)
#' Armin Rauschenberger, Enrico Glaab (2019)
#' "Multivariate elastic net regression through stacked generalisation"
#' \emph{Manuscript in preparation.}
#' \emph{Manuscript in preparation}.
#'
#' @details
#' \strong{correlation:}
#' The \eqn{q} outcomes should be positively correlated.
#' Avoid negative correlations by changing the sign of the variable.
#'
#' elastic net mixing parameters:
#' \strong{elastic net:}
#' \code{alpha.base} controls input-output effects,
#' \code{alpha.meta} controls output-output effects;
#' ridge (\eqn{0}) renders dense models,
#' lasso (\eqn{1}) renders sparse models
#' lasso renders sparse models (\code{alpha}\eqn{=1}),
#' ridge renders dense models (\code{alpha}\eqn{=0})
#'
#' @return
#' This function returns an object of class \code{joinet}.
#' Available methods include
#' \code{\link[=predict.joinet]{predict}},
#' \code{\link[=coef.joinet]{coef}},
#' and \code{\link[=weights.joinet]{weights}}.
#' The slots \code{base} and \code{meta} each contain
#' \eqn{q} \code{\link[glmnet]{cv.glmnet}}-like objects.
#'
#' @examples
#' n <- 30; q <- 2; p <- 20
......@@ -75,7 +85,7 @@
#' X <- matrix(rnorm(n*p),nrow=n,ncol=p)
#' object <- joinet(Y=Y,X=X)
#'
joinet <- function(Y,X,family="gaussian",nfolds=10,foldid=NULL,type.measure="deviance",alpha.base=0,alpha.meta=0,...){
joinet <- function(Y,X,family="gaussian",nfolds=10,foldid=NULL,type.measure="deviance",alpha.base=1,alpha.meta=0,...){
#--- temporary ---
# family <- "gaussian"; nfolds <- 10; foldid <- NULL; type.measure <- "deviance"
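The signature change above switches the default from ridge (`alpha.base = 0`) to lasso (`alpha.base = 1`) for the base learners. A minimal usage sketch under the new default, with the old behaviour recovered explicitly (assumes the joinet package is installed; not run here):

```r
library(joinet)
set.seed(1)
n <- 30; q <- 2; p <- 20
Y <- matrix(rnorm(n*q), nrow=n, ncol=q)
X <- matrix(rnorm(n*p), nrow=n, ncol=p)
# new default: lasso base learners (sparse input-output effects)
fit_lasso <- joinet(Y=Y, X=X, alpha.base=1)
# old default: ridge base learners (dense input-output effects)
fit_ridge <- joinet(Y=Y, X=X, alpha.base=0)
```

With randomly generated outcomes the function may warn about negative correlations, as the documentation cautions.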
......@@ -238,6 +248,11 @@ joinet <- function(Y,X,family="gaussian",nfolds=10,foldid=NULL,type.measure="dev
#' @param ...
#' further arguments (not applicable)
#'
#' @return
#' This function returns predictions from base and meta learners.
#' The slots \code{base} and \code{meta} each contain a matrix
#' with \eqn{n} rows (samples) and \eqn{q} columns (variables).
#'
#' @examples
#' n <- 30; q <- 2; p <- 20
#' #Y <- matrix(rnorm(n*q),nrow=n,ncol=q)
......@@ -303,6 +318,13 @@ predict.joinet <- function(object,newx,type="response",...){
#' @param ...
#' further arguments (not applicable)
#'
#' @return
#' This function returns the pooled coefficients.
#' The slot \code{alpha} contains the intercepts
#' in a vector of length \eqn{q},
#' and the slot \code{beta} contains the slopes
#' in a matrix with \eqn{p} rows (inputs) and \eqn{q} columns.
#'
#' @examples
#' n <- 30; q <- 2; p <- 20
#' Y <- matrix(rnorm(n*q),nrow=n,ncol=q)
......@@ -362,6 +384,14 @@ coef.joinet <- function(object,...){
#' @param ...
#' further arguments (not applicable)
#'
#' @return
#' This function returns a matrix with
#' \eqn{1+q} rows and \eqn{q} columns.
#' The first row contains the intercepts,
#' and the other rows contain the slopes;
#' each slope is the effect of the outcome
#' in its row on the outcome in its column.
#'
#' @examples
#' n <- 30; q <- 2; p <- 20
#' Y <- matrix(rnorm(n*q),nrow=n,ncol=q)
......@@ -414,10 +444,15 @@ print.joinet <- function(x,...){
#'
#' @param mnorm,spls,sier,mrce
#' experimental arguments\strong{:}
#' logical (install packages \code{spls}, \code{SiER}, or \code{MRCE})
#' logical (requires packages \code{spls}, \code{SiER}, or \code{MRCE})
#'
#' @param ...
#' further arguments passed to \code{\link[glmnet]{glmnet}} and \code{\link[glmnet]{cv.glmnet}}
#' further arguments passed to \code{\link[glmnet]{glmnet}}
#' and \code{\link[glmnet]{cv.glmnet}}
#'
#' @return
#' This function returns a matrix with \eqn{q} columns,
#' including the cross-validated loss.
#'
#' @examples
#' n <- 40; q <- 2; p <- 20
......
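The examples for the model-comparison function are truncated above; a hedged sketch of a call (assuming the function is exported as `cv.joinet`, which is not shown in this diff, and that the experimental arguments default to off):

```r
library(joinet)
set.seed(1)
n <- 40; q <- 2; p <- 20
Y <- matrix(rnorm(n*q), nrow=n, ncol=q)
X <- matrix(rnorm(n*p), nrow=n, ncol=p)
# per the @return block: a matrix with q columns,
# including the cross-validated loss
loss <- cv.joinet(Y=Y, X=X)
```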
......@@ -139,6 +139,14 @@ the coefficients from the base learners.)</p>
</tr>
</table>
<h2 class="hasAnchor" id="value"><a class="anchor" href="#value"></a>Value</h2>
<p>This function returns the pooled coefficients.
The slot <code>alpha</code> contains the intercepts
in a vector of length \(q\),
and the slot <code>beta</code> contains the slopes
in a matrix with \(p\) rows (inputs) and \(q\) columns.</p>
<h2 class="hasAnchor" id="examples"><a class="anchor" href="#examples"></a>Examples</h2>
<pre class="examples"><div class='input'><span class='no'>n</span> <span class='kw'>&lt;-</span> <span class='fl'>30</span>; <span class='no'>q</span> <span class='kw'>&lt;-</span> <span class='fl'>2</span>; <span class='no'>p</span> <span class='kw'>&lt;-</span> <span class='fl'>20</span>
......@@ -151,7 +159,9 @@ the coefficients from the base learners.)</p>
<h2>Contents</h2>
<ul class="nav nav-pills nav-stacked">
<li><a href="#arguments">Arguments</a></li>
<li><a href="#value">Value</a></li>
<li><a href="#examples">Examples</a></li>
</ul>
......
......@@ -188,14 +188,20 @@ numeric between \(0\) (ridge) and \(1\) (lasso)</p></td>
<tr>
<th>mnorm, spls, sier, mrce</th>
<td><p>experimental arguments<strong>:</strong>
logical (install packages <code>spls</code>, <code>SiER</code>, or <code>MRCE</code>)</p></td>
logical (requires packages <code>spls</code>, <code>SiER</code>, or <code>MRCE</code>)</p></td>
</tr>
<tr>
<th>...</th>
<td><p>further arguments passed to <code><a href='https://www.rdocumentation.org/packages/glmnet/topics/glmnet'>glmnet</a></code> and <code><a href='https://www.rdocumentation.org/packages/glmnet/topics/cv.glmnet'>cv.glmnet</a></code></p></td>
<td><p>further arguments passed to <code><a href='https://www.rdocumentation.org/packages/glmnet/topics/glmnet'>glmnet</a></code>
and <code><a href='https://www.rdocumentation.org/packages/glmnet/topics/cv.glmnet'>cv.glmnet</a></code></p></td>
</tr>
</table>
<h2 class="hasAnchor" id="value"><a class="anchor" href="#value"></a>Value</h2>
<p>This function returns a matrix with \(q\) columns,
including the cross-validated loss.</p>
<h2 class="hasAnchor" id="examples"><a class="anchor" href="#examples"></a>Examples</h2>
<pre class="examples"><div class='input'><span class='no'>n</span> <span class='kw'>&lt;-</span> <span class='fl'>40</span>; <span class='no'>q</span> <span class='kw'>&lt;-</span> <span class='fl'>2</span>; <span class='no'>p</span> <span class='kw'>&lt;-</span> <span class='fl'>20</span>
......@@ -211,7 +217,9 @@ logical (install packages <code>spls</code>, <code>SiER</code>, or <code>MRCE</c
<h2>Contents</h2>
<ul class="nav nav-pills nav-stacked">
<li><a href="#arguments">Arguments</a></li>
<li><a href="#value">Value</a></li>
<li><a href="#examples">Examples</a></li>
</ul>
......
......@@ -120,7 +120,7 @@
</div>
<pre class="usage"><span class='fu'>joinet</span>(<span class='no'>Y</span>, <span class='no'>X</span>, <span class='kw'>family</span> <span class='kw'>=</span> <span class='st'>"gaussian"</span>, <span class='kw'>nfolds</span> <span class='kw'>=</span> <span class='fl'>10</span>, <span class='kw'>foldid</span> <span class='kw'>=</span> <span class='kw'>NULL</span>,
<span class='kw'>type.measure</span> <span class='kw'>=</span> <span class='st'>"deviance"</span>, <span class='kw'>alpha.base</span> <span class='kw'>=</span> <span class='fl'>0</span>, <span class='kw'>alpha.meta</span> <span class='kw'>=</span> <span class='fl'>0</span>, <span class='no'>...</span>)</pre>
<span class='kw'>type.measure</span> <span class='kw'>=</span> <span class='st'>"deviance"</span>, <span class='kw'>alpha.base</span> <span class='kw'>=</span> <span class='fl'>1</span>, <span class='kw'>alpha.meta</span> <span class='kw'>=</span> <span class='fl'>0</span>, <span class='no'>...</span>)</pre>
<h2 class="hasAnchor" id="arguments"><a class="anchor" href="#arguments"></a>Arguments</h2>
<table class="ref-arguments">
......@@ -177,21 +177,32 @@ numeric between \(0\) (ridge) and \(1\) (lasso)</p></td>
</tr>
</table>
<h2 class="hasAnchor" id="value"><a class="anchor" href="#value"></a>Value</h2>
<p>This function returns an object of class <code>joinet</code>.
Available methods include
<code><a href='predict.joinet.html'>predict</a></code>,
<code><a href='coef.joinet.html'>coef</a></code>,
and <code><a href='weights.joinet.html'>weights</a></code>.
The slots <code>base</code> and <code>meta</code> each contain
\(q\) <code><a href='https://www.rdocumentation.org/packages/glmnet/topics/cv.glmnet'>cv.glmnet</a></code>-like objects.</p>
<h2 class="hasAnchor" id="details"><a class="anchor" href="#details"></a>Details</h2>
<p>The \(q\) outcomes should be positively correlated.
<p><strong>correlation:</strong>
The \(q\) outcomes should be positively correlated.
Avoid negative correlations by changing the sign of the variable.</p>
<p>elastic net mixing parameters:
<p><strong>elastic net:</strong>
<code>alpha.base</code> controls input-output effects,
<code>alpha.meta</code> controls output-output effects;
ridge (\(0\)) renders dense models,
lasso (\(1\)) renders sparse models</p>
lasso renders sparse models (<code>alpha</code>\(=1\)),
ridge renders dense models (<code>alpha</code>\(=0\))</p>
<h2 class="hasAnchor" id="references"><a class="anchor" href="#references"></a>References</h2>
<p>A Rauschenberger, E Glaab (2019)
<p>Armin Rauschenberger, Enrico Glaab (2019)
"Multivariate elastic net regression through stacked generalisation"
<em>Manuscript in preparation.</em></p>
<em>Manuscript in preparation</em>.</p>
<h2 class="hasAnchor" id="examples"><a class="anchor" href="#examples"></a>Examples</h2>
......@@ -205,6 +216,8 @@ lasso (\(1\)) renders sparse models</p>
<ul class="nav nav-pills nav-stacked">
<li><a href="#arguments">Arguments</a></li>
<li><a href="#value">Value</a></li>
<li><a href="#details">Details</a></li>
<li><a href="#references">References</a></li>
......
......@@ -145,6 +145,12 @@ and \(p\) columns (variables)</p></td>
</tr>
</table>
<h2 class="hasAnchor" id="value"><a class="anchor" href="#value"></a>Value</h2>
<p>This function returns predictions from base and meta learners.
The slots <code>base</code> and <code>meta</code> each contain a matrix
with \(n\) rows (samples) and \(q\) columns (variables).</p>
<h2 class="hasAnchor" id="examples"><a class="anchor" href="#examples"></a>Examples</h2>
<pre class="examples"><div class='input'><span class='no'>n</span> <span class='kw'>&lt;-</span> <span class='fl'>30</span>; <span class='no'>q</span> <span class='kw'>&lt;-</span> <span class='fl'>2</span>; <span class='no'>p</span> <span class='kw'>&lt;-</span> <span class='fl'>20</span>
......@@ -159,7 +165,9 @@ and \(p\) columns (variables)</p></td>
<h2>Contents</h2>
<ul class="nav nav-pills nav-stacked">
<li><a href="#arguments">Arguments</a></li>
<li><a href="#value">Value</a></li>
<li><a href="#examples">Examples</a></li>
</ul>
......
......@@ -137,22 +137,33 @@ i.e. the weights for the base learners.</p>
</tr>
</table>
<h2 class="hasAnchor" id="value"><a class="anchor" href="#value"></a>Value</h2>
<p>This function returns a matrix with
\(1+q\) rows and \(q\) columns.
The first row contains the intercepts,
and the other rows contain the slopes;
each slope is the effect of the outcome
in its row on the outcome in its column.</p>
<h2 class="hasAnchor" id="examples"><a class="anchor" href="#examples"></a>Examples</h2>
<pre class="examples"><div class='input'><span class='no'>n</span> <span class='kw'>&lt;-</span> <span class='fl'>30</span>; <span class='no'>q</span> <span class='kw'>&lt;-</span> <span class='fl'>2</span>; <span class='no'>p</span> <span class='kw'>&lt;-</span> <span class='fl'>20</span>
<span class='no'>Y</span> <span class='kw'>&lt;-</span> <span class='fu'><a href='https://www.rdocumentation.org/packages/base/topics/matrix'>matrix</a></span>(<span class='fu'><a href='https://www.rdocumentation.org/packages/stats/topics/Normal'>rnorm</a></span>(<span class='no'>n</span>*<span class='no'>q</span>),<span class='kw'>nrow</span><span class='kw'>=</span><span class='no'>n</span>,<span class='kw'>ncol</span><span class='kw'>=</span><span class='no'>q</span>)
<span class='no'>X</span> <span class='kw'>&lt;-</span> <span class='fu'><a href='https://www.rdocumentation.org/packages/base/topics/matrix'>matrix</a></span>(<span class='fu'><a href='https://www.rdocumentation.org/packages/stats/topics/Normal'>rnorm</a></span>(<span class='no'>n</span>*<span class='no'>p</span>),<span class='kw'>nrow</span><span class='kw'>=</span><span class='no'>n</span>,<span class='kw'>ncol</span><span class='kw'>=</span><span class='no'>p</span>)
<span class='no'>object</span> <span class='kw'>&lt;-</span> <span class='fu'><a href='joinet.html'>joinet</a></span>(<span class='kw'>Y</span><span class='kw'>=</span><span class='no'>Y</span>,<span class='kw'>X</span><span class='kw'>=</span><span class='no'>X</span>)</div><div class='output co'>#&gt; <span class='warning'>Warning: Negative correlation!</span></div><div class='input'><span class='fu'><a href='https://www.rdocumentation.org/packages/stats/topics/weights'>weights</a></span>(<span class='no'>object</span>)</div><div class='output co'>#&gt; y1 y2
#&gt; (Intercept) 0.565532 -0.3929572
#&gt; V1 0.000000 3.3142619
#&gt; V2 2.494110 0.1431946</div><div class='input'>
<span class='no'>object</span> <span class='kw'>&lt;-</span> <span class='fu'><a href='joinet.html'>joinet</a></span>(<span class='kw'>Y</span><span class='kw'>=</span><span class='no'>Y</span>,<span class='kw'>X</span><span class='kw'>=</span><span class='no'>X</span>)</div><div class='output co'>#&gt; <span class='warning'>Warning: Negative correlation!</span></div><div class='input'><span class='fu'><a href='https://www.rdocumentation.org/packages/stats/topics/weights'>weights</a></span>(<span class='no'>object</span>)</div><div class='output co'>#&gt; y1 y2
#&gt; (Intercept) 0.065960017 -0.20281935
#&gt; V1 0.000000000 0.05786884
#&gt; V2 0.002141384 0.02447026</div><div class='input'>
</div></pre>
</div>
<div class="col-md-3 hidden-xs hidden-sm" id="sidebar">
<h2>Contents</h2>
<ul class="nav nav-pills nav-stacked">
<li><a href="#arguments">Arguments</a></li>
<li><a href="#value">Value</a></li>
<li><a href="#examples">Examples</a></li>
</ul>
......
......@@ -11,6 +11,13 @@
\item{...}{further arguments (not applicable)}
}
\value{
This function returns the pooled coefficients.
The slot \code{alpha} contains the intercepts
in a vector of length \eqn{q},
and the slot \code{beta} contains the slopes
in a matrix with \eqn{p} rows (inputs) and \eqn{q} columns.
}
\description{
Extracts pooled coefficients.
(The meta learner linearly combines
......
......@@ -49,9 +49,14 @@ numeric between \eqn{0} (ridge) and \eqn{1} (lasso)}
numeric between \eqn{0} (ridge) and \eqn{1} (lasso)}
\item{mnorm, spls, sier, mrce}{experimental arguments\strong{:}
logical (install packages \code{spls}, \code{SiER}, or \code{MRCE})}
logical (requires packages \code{spls}, \code{SiER}, or \code{MRCE})}
\item{...}{further arguments passed to \code{\link[glmnet]{glmnet}} and \code{\link[glmnet]{cv.glmnet}}}
\item{...}{further arguments passed to \code{\link[glmnet]{glmnet}}
and \code{\link[glmnet]{cv.glmnet}}}
}
\value{
This function returns a matrix with \eqn{q} columns,
including the cross-validated loss.
}
\description{
Compares univariate and multivariate regression
......
......@@ -6,7 +6,7 @@
\title{Multivariate Elastic Net Regression}
\usage{
joinet(Y, X, family = "gaussian", nfolds = 10, foldid = NULL,
type.measure = "deviance", alpha.base = 0, alpha.meta = 0, ...)
type.measure = "deviance", alpha.base = 1, alpha.meta = 0, ...)
}
\arguments{
\item{Y}{outputs\strong{:}
......@@ -41,18 +41,28 @@ numeric between \eqn{0} (ridge) and \eqn{1} (lasso)}
\item{...}{further arguments passed to \code{\link[glmnet]{glmnet}}}
}
\value{
This function returns an object of class \code{joinet}.
Available methods include
\code{\link[=predict.joinet]{predict}},
\code{\link[=coef.joinet]{coef}},
and \code{\link[=weights.joinet]{weights}}.
The slots \code{base} and \code{meta} each contain
\eqn{q} \code{\link[glmnet]{cv.glmnet}}-like objects.
}
\description{
Implements multivariate elastic net regression.
}
\details{
\strong{correlation:}
The \eqn{q} outcomes should be positively correlated.
Avoid negative correlations by changing the sign of the variable.
elastic net mixing parameters:
\strong{elastic net:}
\code{alpha.base} controls input-output effects,
\code{alpha.meta} controls output-output effects;
ridge (\eqn{0}) renders dense models,
lasso (\eqn{1}) renders sparse models
lasso renders sparse models (\code{alpha}\eqn{=1}),
ridge renders dense models (\code{alpha}\eqn{=0})
}
\examples{
n <- 30; q <- 2; p <- 20
......@@ -62,7 +72,7 @@ object <- joinet(Y=Y,X=X)
}
\references{
A Rauschenberger, E Glaab (2019)
Armin Rauschenberger, Enrico Glaab (2019)
"Multivariate elastic net regression through stacked generalisation"
\emph{Manuscript in preparation.}
\emph{Manuscript in preparation}.
}
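The \value block above says the slots `base` and `meta` each contain q `cv.glmnet`-like objects. A hedged sketch of inspecting that structure (assumes the joinet package is installed; `str` is used deliberately because the exact slot layout is not guaranteed by the documentation):

```r
library(joinet)
set.seed(1)
n <- 30; q <- 2; p <- 20
Y <- matrix(rnorm(n*q), nrow=n, ncol=q)
X <- matrix(rnorm(n*p), nrow=n, ncol=p)
object <- joinet(Y=Y, X=X)
# top-level view of the fitted object, including the
# "base" and "meta" slots described in the @return block
str(object, max.level=1)
```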
......@@ -17,6 +17,11 @@ and \eqn{p} columns (variables)}
\item{...}{further arguments (not applicable)}
}
\value{
This function returns predictions from base and meta learners.
The slots \code{base} and \code{meta} each contain a matrix
with \eqn{n} rows (samples) and \eqn{q} columns (variables).
}
\description{
Predicts outcome from features with stacked model.
}
......
......@@ -11,6 +11,14 @@
\item{...}{further arguments (not applicable)}
}
\value{
This function returns a matrix with
\eqn{1+q} rows and \eqn{q} columns.
The first row contains the intercepts,
and the other rows contain the slopes;
each slope is the effect of the outcome
in its row on the outcome in its column.
}
\description{
Extracts coefficients from the meta learner,
i.e. the weights for the base learners.
......