[colug-432] Deep Learning preso

R P Herrold herrold at owlriver.com
Fri Dec 9 11:45:27 EST 2016


On Thu, 8 Dec 2016, jep200404 at columbus.rr.com wrote:

> On Mon, 24 Oct 2016 10:58:28 -0400, Tom Hanlon <tom at functionalmedia.com> wrote:
> 
> > I took a new job working on the docs and training for DeepLearning4J.
> 
> > I would like to give a talk about it.
> 
> I would like to see it.

As would I

I see at [1]:
	Deep-learning networks perform automatic feature 
extraction without human intervention, unlike most traditional 
machine-learning algorithms. Given that feature extraction is 
a task that can take teams of data scientists years to 
accomplish, deep learning is a way to circumvent the 
chokepoint of limited experts. It augments the powers of small 
data science teams, which by their nature do not scale. [1]
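
For concreteness, here is roughly what that claim looks like in
DL4J's builder API (a sketch from memory; exact builder methods
shift between releases): the raw pixel vector goes straight into
the first layer, and the hidden layer's learned weights stand in
for hand-engineered features.

import org.deeplearning4j.nn.conf.MultiLayerConfiguration;
import org.deeplearning4j.nn.conf.NeuralNetConfiguration;
import org.deeplearning4j.nn.conf.layers.DenseLayer;
import org.deeplearning4j.nn.conf.layers.OutputLayer;
import org.deeplearning4j.nn.multilayer.MultiLayerNetwork;
import org.nd4j.linalg.activations.Activation;
import org.nd4j.linalg.lossfunctions.LossFunctions;

public class RawPixelsIn {
    public static void main(String[] args) {
        // Raw 28x28 pixel values in, 10 class scores out.  No
        // hand-built edge/texture/histogram features anywhere in
        // the pipeline; the hidden layer's weights become the
        // "features" during training.
        MultiLayerConfiguration conf = new NeuralNetConfiguration.Builder()
                .seed(123)
                .list()
                .layer(0, new DenseLayer.Builder()
                        .nIn(28 * 28).nOut(256)
                        .activation(Activation.RELU)
                        .build())
                .layer(1, new OutputLayer.Builder(
                                LossFunctions.LossFunction.NEGATIVELOGLIKELIHOOD)
                        .nIn(256).nOut(10)
                        .activation(Activation.SOFTMAX)
                        .build())
                .build();

        MultiLayerNetwork model = new MultiLayerNetwork(conf);
        model.init();
        // model.fit(new MnistDataSetIterator(64, true, 123));
        // (fitting on raw digit images needs the dl4j datasets module)
        System.out.println(conf.toJson());
    }
}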


The archetypal error of self-training systems is that they
lock onto a non-causative 'signal', a predictor that merely
correlates with the outcome, and treat that correlation as
causation rather than coincidence.  As I recall, the story
goes that an automated combat targeting system was trained
on images shot in bright sunlight, and so decided that the
presence of a hard shadow marked something hostile and a
good target.
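
A toy sketch of that failure mode (my own illustration in plain
Java, nothing DL4J-specific): a learner that keeps whichever
single feature best separates its training photos latches onto
the lighting cue, then falls apart once lighting is no longer
correlated with the label.

import java.util.Random;

public class SpuriousSignalDemo {

    // Each row is {brightness, vehicle-shape score, label}.
    // Brightness is correlated with the label only when the
    // training photos were all shot in bright sun; the shape
    // score is the causal signal, but noisier.
    static double[][] makeData(int n, boolean sunnyTrainingSet, Random rng) {
        double[][] rows = new double[n][3];
        for (int i = 0; i < n; i++) {
            int label = rng.nextBoolean() ? 1 : 0;
            double brightness = sunnyTrainingSet
                    ? (label == 1 ? 0.9 : 0.2) + 0.05 * rng.nextGaussian()
                    : rng.nextDouble();               // lighting unrelated to label
            double shape = label + 0.6 * rng.nextGaussian();
            rows[i] = new double[]{brightness, shape, label};
        }
        return rows;
    }

    // Error of the rule "predict 1 iff feature f > 0.5".
    static double errorRate(double[][] data, int f) {
        int wrong = 0;
        for (double[] r : data) {
            int pred = r[f] > 0.5 ? 1 : 0;
            if (pred != (int) r[2]) wrong++;
        }
        return (double) wrong / data.length;
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        double[][] train = makeData(1000, true,  rng);  // sunny-day training set
        double[][] test  = makeData(1000, false, rng);  // mixed-lighting test set

        // "Training": keep whichever single feature has lower training error.
        int chosen = errorRate(train, 0) <= errorRate(train, 1) ? 0 : 1;

        System.out.printf("chose feature %d (0=brightness, 1=shape)%n", chosen);
        System.out.printf("train error: %.3f   test error: %.3f%n",
                errorRate(train, chosen), errorRate(test, chosen));
        // Brightness wins on the training set, then collapses on the test set.
    }
}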

How is that failure mode addressed here?


Also, why implement this in a JVM rather than in other 
environments?  Is the perception that it gives better or 
easier massive lateral scaling?


-- Russ herrold

1. https://deeplearning4j.org/neuralnet-overview

