GEM (Geometric Empirical Model) AI Neural Network Key Features 

Neural networks have many unknowns. This is one reason many AI experts find it difficult to accept that every aspect of a neural network can be computed instantaneously.

 

Construction of all currently existing neural network structures faces the following challenges (a sketch illustrating these choices follows the list):

a.      The number of hidden layers is unknown. AI experts draw on considerable experience to make an educated guess, since only an appropriate number of layers can find a solution, and even experts must often resort to trial and error.

b.      The number of neurons in each layer is unknown. Expertise is again required to reduce the trial and error needed to find these numbers, and experts may disagree widely on all of these choices.

c.      The type of neurons in each layer may vary, from simple to complex: convolutional units, memory cells, kernels, pooling units, and more. The list of neuron types is endless, limited only by the imagination.

d.      The neuron activation function. This may be a step, linear, sigmoid, tanh, ReLU, or other function. Again, the list is endless.

e.      The number and types of input and output links attached to each neuron. All the neurons in one layer may be attached to all the neurons in the next layer, links may connect to neurons in other layers, or recurrent links may connect back to previous neurons or to the neuron itself.
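
To make these guesses concrete, here is a minimal Python sketch. It is purely illustrative: the layer sizes, activation, and dense connectivity are the kinds of assumptions a practitioner must guess up front, not anything prescribed by GEM.

```python
# Illustrative sketch of the structural guesses behind a conventional
# feed-forward network. Every number and choice below is an assumption.
import numpy as np

rng = np.random.default_rng(0)

# (a) hidden-layer count and (b) neurons per layer: educated guesses.
layer_sizes = [4, 16, 8, 1]              # input, two hidden layers, output

# (d) activation function: one guess among step, sigmoid, tanh, ReLU, ...
relu = lambda z: np.maximum(z, 0.0)

# (e) connectivity guess: every neuron links to every neuron in the next
# layer, with randomly initialized weights and thresholds (biases).
weights = [rng.normal(size=(m, n)) for m, n in zip(layer_sizes, layer_sizes[1:])]
biases = [np.zeros(n) for n in layer_sizes[1:]]

def forward(x):
    for W, b in zip(weights, biases):
        x = relu(x @ W + b)              # same activation everywhere: another guess
    return x

print(forward(np.ones(4)))               # untrained output: weights are random
```

Every one of these choices must be revisited if training fails, which is the trial and error described above.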

Training in currently existing neural networks also presents numerous challenges (see the sketch after this list):

a.      The weight of each connection link. This is usually determined through back-propagation, a type of gradient search.

b.      The threshold offset of each neuron. This is also determined through back-propagation.

c.      The learning rate. Too fast, and convergence will be unstable; too slow, and training will take too long. Even high learning rates can take considerable time to converge to a solution.

d.      The solution may converge to a local minimum, requiring retraining. This condition can be difficult or impossible to detect, and there are also situations where training never converges on a solution.

e.      Training rarely, if ever, finds exact solutions, and even rough approximations with occasional incorrect outputs are considered a great success.
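
The learning-rate and local-minimum pitfalls can be seen even in one dimension. The sketch below is a generic gradient-descent toy (a single weight on a loss with two minima), not any specific network:

```python
# Gradient descent on a 1-D loss with two minima, illustrating the
# learning-rate and local-minimum pitfalls listed above (toy example).
loss = lambda w: (w**2 - 1.0)**2 + 0.3 * w        # global minimum near w = -1,
dloss = lambda w: 4.0 * w * (w**2 - 1.0) + 0.3    # local minimum near w = +1

def train(w, lr, steps=500):
    for _ in range(steps):
        w -= lr * dloss(w)                        # one back-propagation-style update
    return w

print(train(-0.5, lr=0.05))   # ~ -1.02: converges to the global minimum
print(train(+0.5, lr=0.05))   # ~ +0.96: trapped in a local minimum, with
                              # no built-in signal that retraining is needed
```

Nothing in the update rule distinguishes the two outcomes, which is why a converged network may still need to be retrained.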

 

 

GEM AI Neural Network, enabled by GpuScript, is a breakthrough in mathematics. It is a revolutionary AI neural network that actually models a biological neural network, with left and right hemispheres. It automatically and instantaneously determines and constructs:

All the hidden layers*

All the neurons in each layer 

All the links between neurons

All the activation weights for each connection 

The linear and non-linear combinations of inputs at each neuron 

The activation function and threshold for each neuron

 

The following table illustrates some of its key features in comparison with currently existing neural networks. Each feature is shown first without GEM AI, then with GEM AI:

Learning

Without GEM AI: Learning time increases with complexity and the number of training examples. Trial and error is required to guess the structure of the neural network. Learning may take days, weeks, months, or years, or the network may never learn. Learning is also expensive in terms of hardware and power usage.

With GEM AI: Instantaneous, with one GPU call. Extremely large training sets may require a few additional GPU calls.

Thinking

Without GEM AI: Currently existing neural networks have no concept of thinking.

With GEM AI: GEM thinking is similar to how a brain thinks. Based on experience, thinking determines what needs to be done to produce a desired result. It can instantaneously produce optimal designs that are far better than what all human experts combined could accomplish in hundreds of years.

Evaluation

Without GEM AI: Currently existing neural networks are surprisingly fast for evaluation. Not as fast as GEM, but not bad.

With GEM AI: All hidden layers are evaluated concurrently, so the number of hidden layers and the number of neurons in each layer have no effect on the GPU execution time. Large numbers of different inputs can be computed simultaneously, making multiple evaluations extremely fast.
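
For context on why batched evaluation is fast in general: a feed-forward pass is a sequence of matrix products, so one product per layer evaluates an entire batch of inputs at once. A minimal numpy sketch of a generic forward pass follows (not GEM's GPU implementation, which is not public):

```python
# Batched evaluation: one matrix product per layer processes every input
# in the batch simultaneously. Sizes and weights are illustrative only.
import numpy as np

rng = np.random.default_rng(2)
weights = [rng.normal(size=(8, 32)),
           rng.normal(size=(32, 16)),
           rng.normal(size=(16, 4))]

def evaluate(batch):                        # batch shape: (n_inputs, 8)
    for W in weights:
        batch = np.maximum(batch @ W, 0.0)  # ReLU applied to the whole batch
    return batch                            # shape: (n_inputs, 4)

print(evaluate(rng.normal(size=(10_000, 8))).shape)   # (10000, 4)
```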

Generalization

Without GEM AI: After learning the training set to some degree, the neural network must be tested to ensure adequate interpolation; interpolation is often evaluated with a test set, while extrapolation is usually untrustworthy and rarely tested. This testing process is also prone to bias and distortion because the test points may themselves have errors, so the test set may indicate that training was successful when in fact it was inaccurate. Poor generalization means the training must be restarted from the beginning.

With GEM AI: GEM performs perfect interpolation and extrapolation, in the sense that it perfectly matches solutions from linear regression, and goes beyond linear regression to non-linear regression, giving the best-fit non-linear hyper-surface**. It can find perfect solutions for matrices, linear regression, process control, any statistical problem, and any AI problem, better and faster than any specialized algorithm in existence.

Error Correction

Without GEM AI: Training examples with unknowns are usually removed in preprocessing. Outliers are usually detected and removed manually. Additional code may detect and remove duplicates or closely spaced inputs, and perhaps average their outputs.

With GEM AI: GEM automatically fills in unknown values, detects and corrects outliers based on an outlier tolerance, corrects jitter and scatter caused by rounding or measurement noise, and determines the minimum number of training examples that can interpolate or extrapolate all the other examples in the training set.
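
GEM's error-correction internals are not published; as a generic stand-in, the sketch below shows the kind of tolerance-based correction this describes, using a median/MAD rule (the rule and the threshold are illustrative assumptions, not GEM's algorithm):

```python
# Generic tolerance-based cleanup (illustrative stand-in, not GEM): values
# deviating from the median by more than `tol` MADs are pulled back to the
# median, and NaN "unknowns" are filled the same way.
import numpy as np

def correct(column, tol=3.5):
    med = np.nanmedian(column)
    mad = np.nanmedian(np.abs(column - med)) or 1e-12        # avoid divide-by-zero
    dev = np.abs(np.nan_to_num(column, nan=np.inf) - med)    # unknowns: infinite deviation
    return np.where(dev / mad > tol, med, column)

data = np.array([1.0, 1.1, 0.9, np.nan, 42.0, 1.05])
print(correct(data))   # the unknown is filled and 42.0 is corrected
```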

Data Processing

Without GEM AI: Currently existing neural networks often require careful data preprocessing to reduce dimensionality, remove correlations between inputs, equalize input variance, and otherwise simplify and linearize the data representation.

With GEM AI: GEM can handle high dimensionality, high complexity, correlated inputs, and non-linear relationships in the data. Little to no data preprocessing is required.
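
For contrast, here is what a typical conventional preprocessing pipeline of the kind described above might look like: standardize each input, then decorrelate and reduce dimensionality with PCA (one common choice among many; the sketch is illustrative only):

```python
# Typical conventional preprocessing (illustrative): standardize inputs,
# then decorrelate them with PCA and drop low-variance dimensions.
import numpy as np

def preprocess(X, keep=0.99):
    X = (X - X.mean(axis=0)) / X.std(axis=0)     # equalize input variance
    vals, vecs = np.linalg.eigh(np.cov(X, rowvar=False))
    order = np.argsort(vals)[::-1]               # sort components by variance
    vals, vecs = vals[order], vecs[:, order]
    k = np.searchsorted(np.cumsum(vals) / vals.sum(), keep) + 1
    return X @ vecs[:, :k]                       # reduced, decorrelated data

rng = np.random.default_rng(3)
raw = rng.normal(size=(200, 2)) @ rng.normal(size=(2, 6))   # correlated inputs
print(preprocess(raw).shape)                                # (200, 2)
```

GEM's claim is that none of this is needed, because it handles the raw, correlated, non-linear data directly.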

Machine Learning (ML)

Without GEM AI: Currently existing neural networks have little to no concept of Machine Learning, as this concept requires repeated training cycles that can be very time consuming.

With GEM AI: GEM Machine Learning (ML) operates like the scientific method. First, collect a training set of observations. Based on those observations, use GEM learning and thinking to generate a hypothesis and a predicted result. Then test the hypothesis and prediction by running an additional external experiment. If the prediction does not match the experimental result, add this example to the training set. Repeat until the prediction matches the experimental result.

GEM ML can find the optimal result with the fewest experiments, superior to block-variance statistics in both accuracy and number of experiments. GEM ML can determine welding parameters that give the best weld, 3D printing parameters that produce the best result, and manufacturing parameters that produce the strongest and highest quality parts, far better than all human experts combined could discover through experience and trial and error.
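
The loop described above can be written down directly. In the runnable toy below, the learn/think/experiment steps are crude stand-ins (a quadratic fit, a grid search, and a hidden test function); the real GEM API is not public, so every name here is hypothetical:

```python
# Runnable toy of the GEM ML loop described above. The learn/think/experiment
# functions are illustrative stand-ins, not GEM's actual (unpublished) API.
import numpy as np

def run_experiment(x):                  # hidden true process (e.g. weld quality)
    return -(x - 2.0) ** 2 + 5.0

def learn(samples):                     # stand-in for GEM learning: quadratic fit
    xs, ys = zip(*samples)
    return np.polynomial.Polynomial.fit(xs, ys, deg=2)

def think(model):                       # stand-in for GEM thinking: pick the
    grid = np.linspace(-10.0, 10.0, 2001)   # input the model predicts is best
    return grid[np.argmax(model(grid))]

samples = [(x, run_experiment(x)) for x in (-1.0, 0.0, 1.0)]
for _ in range(10):                     # the scientific-method loop from the text
    model = learn(samples)
    x = think(model)
    predicted, actual = model(x), run_experiment(x)
    if abs(actual - predicted) < 1e-6:  # prediction matches the experiment: stop
        break
    samples.append((x, actual))         # otherwise, learn from the new example
print(x, actual)                        # ~ 2.0 5.0: the optimum is found
```

Because this toy's hidden process happens to be quadratic, the first hypothesis is already confirmed; a harder process would drive more iterations of the loop.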

Predictive Analytics

Without GEM AI: Currently existing neural networks have little to no concept of Predictive Analytics, as this concept requires repeated training cycles.

With GEM AI: GEM combines learning and thinking to make accurate predictions of data with correlated inputs.

 

 

* Hidden layers are computed sequentially during training. Although this is done in a single GPU call, only one neural network can be trained at a time. 

 

** For example, given the matrix equation Ax=b and a training set with x vectors and corresponding b vectors, GEM can be presented with any x and exactly compute the correct b, or be presented with any b and exactly compute the correct x, for any non-singular matrix of any non-trivial size. GEM can compute these solutions much faster than any existing matrix inversion algorithm, Monte Carlo method, or conjugate gradient search method, even when including the GEM training time.
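
The footnote's Ax=b scenario can be checked with ordinary least squares as a stand-in for GEM (whose algorithm is not public): with enough exact (x, b) pairs, the recovered linear map reproduces b for any new x, including points far outside the training set:

```python
# Sketch of the footnote's Ax = b scenario, with least squares standing in
# for GEM. The hidden matrix is recovered exactly from training pairs.
import numpy as np

rng = np.random.default_rng(1)
A = rng.normal(size=(5, 5))                    # hidden non-singular matrix

X = rng.normal(size=(25, 5))                   # training inputs x (as rows)
B = X @ A.T                                    # corresponding outputs b = A x

A_hat, *_ = np.linalg.lstsq(X, B, rcond=None)  # recover the linear map
x_new = 100.0 * rng.normal(size=5)             # far outside the training range
print(np.allclose(x_new @ A_hat, A @ x_new))   # True: exact extrapolation
```

The reverse direction (given b, compute x) follows by swapping the roles of X and B in the fit.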