Welcome to SkyRELR!

SkyRELR is an on-demand portal for automated machine learning using the RELR technology, implemented in the Amazon Cloud. In addition to a user-friendly GUI that allows point-and-click automated machine learning model builds, we provide a number of Python scripts for tasks such as visualizing results and automated model building and scoring, which can be accessed through an API that is also part of each SkyRELR cloud instance.
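As a rough illustration of what scripted access might look like, the sketch below submits a model build and then scores new data over HTTP from Python. The instance address, endpoint paths, file names, and field names are all assumptions made for this sketch and are not the documented SkyRELR API.

    # Illustrative only: the URL, endpoints, and field names below are assumptions,
    # not the documented SkyRELR API.
    import requests

    INSTANCE_URL = "https://my-skyrelr-instance.example.com/api"  # hypothetical instance address

    # Submit an automated model build against a training file already on the instance
    build = requests.post(f"{INSTANCE_URL}/models",
                          json={"training_data": "campaign_training.csv",  # hypothetical file
                                "target": "responded"})                    # hypothetical target column
    model_id = build.json()["model_id"]

    # Score a new data set with the finished model and print the returned scores
    scores = requests.post(f"{INSTANCE_URL}/models/{model_id}/score",
                           json={"scoring_data": "campaign_new.csv"})
    print(scores.json())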

SkyRELR is designed for those who understand the value of accurate, stable, automated machine learning that may allow causal discovery. A temporary 30-day license to a lighter-weight 2 CPU core virtual SkyRELR cloud instance can be purchased through the Product page of this website. This temporary SkyRELR cloud product is largely designed for one-off model building and scoring in lighter data environments. Organizations with heavier data or embedded production needs may purchase extended long-term licenses, including a license to the Amazon Machine Image used to build SkyRELR instances, which can access up to 40 CPU core virtual servers on demand in the Amazon Cloud. The Amazon Machine Image of SkyRELR can easily be integrated with Python Big Data modules such as PySpark to allow many simultaneous instances in complex networks and clusters, as in the sketch below. These long-term licenses may also be perpetual and include Python and Linux source code as well as exclusive licensing or sublicensing rights to the patented technology for commercial application.
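As a hedged sketch of the PySpark integration idea, the example below fans scoring requests out from Spark workers to a SkyRELR instance. The scoring endpoint, payload format, and file paths are assumptions made for illustration only.

    # Minimal sketch: distribute scoring calls to a SkyRELR instance across Spark
    # partitions. The endpoint, payload format, and paths are hypothetical.
    from pyspark.sql import SparkSession, Row
    import requests

    spark = SparkSession.builder.appName("skyrelr-scoring").getOrCreate()
    df = spark.read.csv("s3://my-bucket/new_records.csv", header=True)  # hypothetical input path

    SCORE_URL = "https://my-skyrelr-instance.example.com/api/models/123/score"  # hypothetical endpoint

    def score_partition(rows):
        # Post each record in the partition to the instance and keep the returned score
        for row in rows:
            resp = requests.post(SCORE_URL, json=row.asDict())
            yield Row(**row.asDict(), score=resp.json().get("score"))

    scored = df.rdd.mapPartitions(score_partition).toDF()
    scored.write.csv("s3://my-bucket/scored_records", header=True)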

Because of its ability to generate accurate predictions that replicate well and quite often allow very parsimonious, putative causal insights, the patented RELR technology has been used by our customers to target major media advertising for many of the world’s well-known consumer brands for over 5 years. The advantage of this cloud implementation is that RELR is a proven, stable machine learning technology, and companies can purchase exclusive rights to the technology in their industry to ensure that their competitors cannot access it. This can be a huge advantage in most commercial applications. The alternative is for a company to spend years in R&D with unreliable open source machine learning, where any hard-won small successes are unprotected from competitors once publicized, as will often be required as machine learning becomes more regulated. In contrast, RELR allows companies to enjoy the immediate benefits of commercially viable, scalable, stable Big Data machine learning technology with patent protection and without large R&D algorithm development costs. RELR has very general application across all areas of machine learning, from face and image recognition to speech and natural language processing to genomic and health applications to financial and business applications.

What is RELR?
RELR (pronounced ‘RELLER’) is the acronym for Reduced Error Logistic Regression. RELR is a neuromorphic algorithm designed to model the deep explicit and implicit learning mechanisms of neurons through a form of logistic regression that automatically estimates the probability of error and removes this error to generate stable, accurate, rapid implicit predictions and/or parsimonious explicit predictions. Compared to other deep learning neuromorphic algorithms and other machine learning, RELR’s biggest advantage is that it returns very stable and accurate models that replicate well, so the predictions and selected features can be trusted given minimal training sample sizes, even with high-dimension and multicollinear input data. For example, Explicit RELR allows parsimonious ‘explanatory models’ that can be interpreted in terms of the putative causal reasons for the predictions, because these insights replicate across randomly assigned, independent training observations, as shown by this model of the financial market crash of 2009.

RELR was invented by Daniel M. Rice, who also discovered that Alzheimer’s disease has a preclinical period of at least 10 years. This preclinical Alzheimer’s discovery was based upon an objective explanatory and putative causal predictive model, like those possible with Explicit RELR, whose conclusions were replicated and ultimately validated in longitudinal studies. This discovery completely defied the subjective preconceived biases of “experts”. RELR is a machine learning tool designed to automate this scientific discovery process, in which the most likely predictions generated by the data have priority over expert opinions.

Unlike deep artificial neural networks, which require substantial subjective or arbitrary modeler choices and significant back-and-forth tuning, RELR is completely automated and involves no subjective or arbitrary human decisions to build a model given the input data. Our customers tell us that it used to take them weeks or longer to select an often unreliable predictive model with traditional algorithms, whereas RELR builds a reliably accurate model automatically.

RELR is “not your grandfather’s logistic regression”: this SkyRELR implementation yields models that are equivalent to a four-layer deep learning artificial neural network in terms of deciphering complex interactions between inputs, along with nonlinear effects, as illustrated in the sketch below. But unlike artificial neural networks, which are based upon a 1980s understanding of neuroscience, RELR models the substantial deep learning that 21st century neuroscience believes likely occurs within individual neurons. RELR also has a significant advantage over 1980s artificial neural networks in not being a black box. So even Implicit RELR models, which unlike Explicit RELR models are not parsimonious and thus are not interpretable, are still transparent and can be debugged if necessary in production environments.
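To illustrate what modeling interactions and nonlinear effects within a logistic regression framework looks like, the sketch below fits an ordinary scikit-learn logistic regression on expanded interaction and squared terms over synthetic data. It is a stand-in for the idea only, not the patented RELR error-rejection method.

    # Stand-in illustration: ordinary logistic regression on expanded interaction and
    # squared terms, not the patented RELR error-rejection method itself.
    import numpy as np
    from sklearn.preprocessing import PolynomialFeatures
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(0)
    X = rng.normal(size=(500, 3))
    # Toy target driven by an interaction (x0*x1) and a nonlinear effect (x2 squared)
    y = (X[:, 0] * X[:, 1] + X[:, 2] ** 2 + rng.normal(scale=0.5, size=500) > 1).astype(int)

    # Expand inputs into main effects, pairwise interactions, and squared terms
    expander = PolynomialFeatures(degree=2, include_bias=False)
    X_exp = expander.fit_transform(X)

    model = LogisticRegression(max_iter=1000).fit(X_exp, y)
    for name, coef in zip(expander.get_feature_names_out(["x0", "x1", "x2"]), model.coef_[0]):
        print(f"{name:>8}: {coef:+.2f}")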

In more complex applications such as image recognition, RELR models can be stacked hierarchically to allow deeper layers of processing. For example, a more surface-level RELR model may classify whether a part of an image is part of a face, part of a lexical representation, or part of another animate or inanimate object using convolutional neural feature representations. Other deeper-layer RELR models may then further classify these image components, much as occurs in mammalian visual processing. By stacking individual RELR models hierarchically, very deep neural network architectures with many more layers than the four possible in a single RELR model may be built, as in the sketch below.
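The sketch below shows the stacking idea on synthetic data, using ordinary scikit-learn logistic regressions as stand-ins for individual RELR models: a patch-level model feeds its probabilities into a higher-level, whole-image model.

    # Minimal sketch of hierarchical stacking; ordinary logistic regressions stand in
    # for individual RELR models and the data is synthetic.
    import numpy as np
    from sklearn.linear_model import LogisticRegression

    rng = np.random.default_rng(1)

    # Layer 1: classify each image patch from its (toy) feature vector
    patch_features = rng.normal(size=(2000, 16))             # 200 images x 10 patches each
    patch_labels = (patch_features[:, 0] > 0).astype(int)    # toy "is this patch part of a face?"
    patch_model = LogisticRegression(max_iter=1000).fit(patch_features, patch_labels)

    # Layer 2: classify the whole image from the layer-1 patch probabilities
    patch_probs = patch_model.predict_proba(patch_features)[:, 1].reshape(200, 10)
    image_labels = (patch_probs.mean(axis=1) + rng.normal(scale=0.05, size=200) > 0.5).astype(int)
    image_model = LogisticRegression(max_iter=1000).fit(patch_probs, image_labels)

    print(image_model.predict_proba(patch_probs[:3]))  # stacked, two-layer predictions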

Why Does RELR’s Stability Have Important Advantages Over Ensembles?
Standard machine learning algorithms, including deep learning back-propagation neural networks, do not produce predictions that replicate well with complex high-dimension data, which have considerably greater noise and greater tendencies for human bias than low-dimension data. In contrast, RELR’s patented, built-in, automatic error rejection removes sampling error, data error, rounding error, multicollinearity, and other forms of error that cause models not to replicate. RELR’s reliable replication also occurs because its modeling methods are completely automated and have no user-defined subjective parameters that would allow biases, arbitrary guesses, or subjective back-and-forth tuning on the part of model builders. These case examples highlight the very high correlation in automated RELR predictions across models built from independent, randomly split development data; the sketch below shows how such a replication check can be computed.
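One way to compute the replication check described above is to build models on two independent random halves of the development data and correlate their predictions on a common holdout. An ordinary scikit-learn logistic regression and synthetic data are used as stand-ins here.

    # Replication check: correlate the holdout predictions of two models trained on
    # independent random halves of the development data. Logistic regression and
    # synthetic data stand in for RELR and real development data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import train_test_split

    rng = np.random.default_rng(2)
    X = rng.normal(size=(3000, 20))
    y = (X @ rng.normal(size=20) + rng.normal(size=3000) > 0).astype(int)

    # Hold out data for comparison, then randomly split the development data in half
    X_dev, X_hold, y_dev, _ = train_test_split(X, y, test_size=0.3, random_state=0)
    X_a, X_b, y_a, y_b = train_test_split(X_dev, y_dev, test_size=0.5, random_state=1)

    model_a = LogisticRegression(max_iter=1000).fit(X_a, y_a)
    model_b = LogisticRegression(max_iter=1000).fit(X_b, y_b)

    # Correlation of the two models' holdout predictions measures how well they replicate
    pred_a = model_a.predict_proba(X_hold)[:, 1]
    pred_b = model_b.predict_proba(X_hold)[:, 1]
    print(np.corrcoef(pred_a, pred_b)[0, 1])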

If data scientists produce predictions that would be contradicted by other data scientists or across independent, randomly split model development observations, then the “science” is a problem: which set of predictions should we believe? Said another way, confidence in any predictions that vary widely across modelers or independent data becomes very poor, and the predictions cannot be trusted.

Unlike RELR, other automated algorithms do not control data and modeling error unless they average many different models in an ensemble. One problem is that arbitrary and subjective human choices are involved in how the weighted ensemble average is produced. So even though they are automated, these ensemble algorithms are still often arbitrarily determined by subjective human choices and/or bias. In addition, ensemble averaging also leads to models that are not easily interpreted as causal insights, due to the many predictive features included in the average.

And the ensemble averaging process also may create new predictive importance bias. For example, Random Forest utilizes ensemble averaging of elementary decision trees, but the process of creating the ensemble average in Random Forest greatly amplifies the importance of correlated predictors and predictors with more categories relative to other predictors, as first shown in this well-cited research article. Although these researchers have proposed new ways to overcome the biases related to correlated features, they acknowledge in this later article that these methods do not work better in general and can lead to spurious, inaccurate predictions. In most types of data, it is usually not possible to know in advance whether predictive importance biases are a severe problem in Random Forests and other ensemble methods, so predictive problems related to these artifacts will only be discovered too late in real-world data. The sketch below illustrates the many-categories form of this bias on synthetic data.
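The sketch below is a toy demonstration of the many-categories bias discussed above, using scikit-learn’s impurity-based Random Forest importances on synthetic data; it is not a reproduction of the cited article’s experiments.

    # Toy demonstration of the cardinality bias in impurity-based Random Forest
    # importances: a purely random many-category feature can receive a large share of
    # the importance even though it carries no signal. Data here is synthetic.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    rng = np.random.default_rng(3)
    n = 2000
    informative = rng.integers(0, 2, size=n)         # binary feature that drives the target
    noisy_highcard = rng.integers(0, 100, size=n)    # 100-category feature, pure noise
    y = (informative + rng.normal(scale=1.0, size=n) > 0.5).astype(int)

    X = np.column_stack([informative, noisy_highcard])
    forest = RandomForestClassifier(n_estimators=300, random_state=0).fit(X, y)
    print(dict(zip(["informative", "noisy_highcard"], forest.feature_importances_)))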

To reiterate, RELR automatically generates most probable predictive models that completely avoid the predictive biases and processing complexity of ensemble averaging, yet are still very stable. For this and other important reasons, RELR has shown reliably good validation sample accuracy compared with all other algorithms it has been tested against in well-controlled tests, including Stochastic Gradient Learning, Random Forest-type Bagging, LASSO, LARS, L2-Regularized Logistic Regression, Bayesian Networks, Artificial Neural Networks, Support Vector Machines, Partial Least Squares, Decision Trees, and Stepwise Logistic Regression, as shown across our Case Studies.