615f43c550
support AddSign and PowerSign in config
2017-09-25 06:09:36 +00:00
fe54002671
remove DumbLearner
2017-09-25 06:09:07 +00:00
9a7ffe5f0d
add AddSign and PowerSign optimizers
2017-09-25 04:02:17 +00:00
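For reference: AddSign and PowerSign come from Bello et al.'s neural optimizer search work; both scale each gradient by how well its sign agrees with the sign of a running average of past gradients. A minimal NumPy sketch of the two update rules, with illustrative function names and alpha/beta defaults rather than this repo's actual API:

    import numpy as np

    def addsign_step(param, grad, m, lr=0.01, alpha=1.0, beta=0.9):
        # m: exponential moving average of past gradients
        m = beta * m + (1.0 - beta) * grad
        agree = np.sign(grad) * np.sign(m)               # +1 where signs agree, -1 where they disagree
        new_param = param - lr * (alpha + agree) * grad  # AddSign: step scaled by (alpha + agreement)
        return new_param, m

    def powersign_step(param, grad, m, lr=0.01, alpha=np.e, beta=0.9):
        m = beta * m + (1.0 - beta) * grad
        agree = np.sign(grad) * np.sign(m)
        new_param = param - lr * np.power(alpha, agree) * grad  # PowerSign: step scaled by alpha**agreement
        return new_param, m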
5c83f063be
remove keras stuff so it won't show on google
...
bleh
2017-09-18 04:42:41 +00:00
d38e2076f0
allow multi-input and multi-output models
2017-09-16 18:28:05 +00:00
3386869b30
move actreg tweaking into if statement
...
I was getting division by zero.
2017-09-16 17:33:47 +00:00
65ba80bb96
skip over irrelevant nodes
2017-09-16 17:31:39 +00:00
dcbaef3032
use MomentumClip in warmup for stability
2017-09-16 17:30:52 +00:00
7878f94f43
auto-increment loss filenames to stop clobbering
2017-09-16 17:30:02 +00:00
e22316a4c9
move losses into Model and refactor methods
2017-09-16 17:05:25 +00:00
910facf98d
move NLL to core
2017-08-05 10:59:05 +00:00
0b9c1fe117
allow SGDR to anneal optimizer's learning rate
...
e.g. YellowFin
2017-08-05 10:43:38 +00:00
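SGDR is cosine annealing with warm restarts; having it anneal another optimizer's rate presumably just means the schedule writes into that optimizer's learning rate (e.g. YellowFin's base rate) each step. A hedged sketch of the cosine part of the schedule, with made-up names and defaults:

    import math

    def sgdr_rate(t, period, lr_max=0.05, lr_min=1e-6):
        # cosine-annealed learning rate within one restart period (Loshchilov & Hutter)
        frac = (t % period) / period  # progress through the current period, in [0, 1)
        return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + math.cos(math.pi * frac))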
dbd6c31ea5
fix final rate calculation
2017-08-05 10:43:18 +00:00
915b39d783
allow Optimizers to inspect Models (currently unused)
...
the thing that takes advantage of this may or may not be committed,
so this may or may not get reverted.
2017-08-05 10:41:35 +00:00
de5af4f7f4
allow argument passthru to normalizer in _mr_make_norm
2017-08-05 10:40:39 +00:00
957ee86e20
add PolyLearner: polynomial learning scheduler
2017-08-05 10:40:06 +00:00
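A polynomial scheduler typically decays the rate from a start value to an end value as a power of normalized progress. A one-function sketch under that assumption; the name and parameters are hypothetical, not PolyLearner's actual signature:

    def poly_rate(t, total, lr_start=0.1, lr_end=1e-4, power=2.0):
        # polynomial decay from lr_start to lr_end over `total` steps
        frac = min(max(t / total, 0.0), 1.0)
        return lr_end + (lr_start - lr_end) * (1.0 - frac) ** power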
cc89465adc
tweak comment
2017-08-05 10:39:59 +00:00
058a779f6c
remove some unused arguments
2017-08-05 10:39:32 +00:00
2e74c9160c
tweak CubicGB defaults
2017-08-03 03:39:25 +00:00
001a997e09
correction: batches, not epochs.
2017-08-03 03:38:07 +00:00
9138f73141
update mnist training
...
crank up the learning rate on emnist and use momentum with gradient clipping.
add a simple restart callback.
remove batch size adaptation crap.
remove confidence measures.
2017-08-03 03:36:46 +00:00
7ac67fba8f
fix Bias layer
2017-08-02 11:37:39 +00:00
049d966710
remove biasing from Conv1Dper in favor of Bias layer
2017-08-02 11:30:08 +00:00
4ee2181691
add standalone Bias layer
2017-08-02 11:28:41 +00:00
e7c12c1f44
add ad-hoc weight-sharing method
2017-08-02 11:28:18 +00:00
f507dc10f8
remove DenseOneLess
...
not useful.
2017-08-02 10:52:26 +00:00
4d2251f69f
allow weight sharing; make gradient clearing optional
2017-08-02 10:29:58 +00:00
89fcd25962
fix wording
2017-08-02 07:00:33 +00:00
e4fa5bf63f
add positional control to convolution
2017-08-02 06:47:37 +00:00
5074dcb2aa
add Decimate and Undecimate layers
2017-08-02 06:47:15 +00:00
f28e8d3a54
add/remove comments and fix code style
2017-08-02 03:59:15 +00:00
8b3b8d8288
add rough 1D circular convolution
2017-08-02 03:58:24 +00:00
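A 1D circular convolution wraps the input indices modulo the signal length instead of padding. A tiny NumPy sketch of the idea, not the repo's Conv1Dper implementation:

    import numpy as np

    def circular_conv1d(x, w):
        # y[i] = sum_k w[k] * x[(i + k) mod n]: correlation with a periodic (circular) boundary
        n, k = len(x), len(w)
        return np.array([np.dot(w, x[(i + np.arange(k)) % n]) for i in range(n)])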
5d9efa71c1
move SquaredHalved to core
2017-07-25 22:14:17 +00:00
f43063928e
rename Linear activation to Identity layer
2017-07-25 22:12:27 +00:00
e5fd937ef6
remove cruft from YellowFin
...
I might just remove YellowFin itself because it isn't working for me.
2017-07-25 21:38:09 +00:00
2cf38d4ece
finally fix learning rate scheduling for real
...
okay, this is a disaster, but I think I've got it under control now.
batch-based learners now work like this:
the truncated (integer) part of the epoch variable is the epoch we're working towards,
and the fractional part is how far into that epoch we are.
epochs start at 1, so subtract 1 when doing periodic operations.
2017-07-25 04:25:35 +00:00
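A toy illustration of the convention described above; the helper names are hypothetical, not the repo's:

    import math

    def split_epoch(epoch):
        # epoch is a float such as 3.25: working towards epoch 3, 25% of the way through it
        target = math.trunc(epoch)   # truncated part: the epoch being worked towards
        progress = epoch - target    # fractional part: progress into that epoch
        return target, progress

    def periodic_phase(epoch, period):
        # epochs start at 1, so subtract 1 before any periodic (modulo) arithmetic
        return ((epoch - 1.0) % period) / period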
93547b1974
add a linear (identity) activation for good measure
2017-07-25 04:24:32 +00:00
6933e21e0e
update mnist example
2017-07-23 04:23:57 +00:00
5183cd38f8
add GB output layers for classification
2017-07-23 03:55:19 +00:00
ee83ffa88e
add debug mode to MomentumClip to print norms
2017-07-23 03:54:37 +00:00
b20a34c2de
fix MomentumClip with nesterov enabled
2017-07-22 05:05:29 +00:00
be1795f6ed
use in-place (additive) form of filters
2017-07-21 21:02:47 +00:00
7c4ef4ad05
fix Softplus derivative
2017-07-21 21:02:04 +00:00
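The derivative of softplus, log(1 + e^x), is the logistic sigmoid, which is presumably what the fix restores. A quick reference sketch:

    import numpy as np

    def softplus(x):
        return np.log1p(np.exp(x))       # log(1 + e^x)

    def softplus_derivative(x):
        return 1.0 / (1.0 + np.exp(-x))  # the logistic sigmoid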
c2bb2cfcd5
add centered variant of RMS Prop
2017-07-21 20:20:42 +00:00
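Centered RMSProp additionally tracks a running mean of the gradient and subtracts its square from the second-moment estimate, so steps are normalized by an estimate of the gradient's variance rather than its raw magnitude. A minimal sketch with illustrative names, not the repo's API:

    import numpy as np

    def centered_rmsprop_step(param, grad, mean, sq, lr=0.001, decay=0.9, eps=1e-8):
        mean = decay * mean + (1.0 - decay) * grad       # running mean of gradients
        sq = decay * sq + (1.0 - decay) * grad * grad    # running mean of squared gradients
        var = sq - mean * mean                           # estimate of per-weight gradient variance
        new_param = param - lr * grad / np.sqrt(var + eps)
        return new_param, mean, sq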
fb22f64716
tweak semantics etc.
2017-07-21 19:45:58 +00:00
217a19110a
fix case when no callbacks are given
2017-07-21 19:45:34 +00:00
4a108a10ae
allow MomentumClip, SineCLR, WaveCLR in config
2017-07-21 19:43:57 +00:00
e7a6974829
yeah probably not
2017-07-12 09:07:22 +00:00
928850c2a8
lower process priority
2017-07-11 12:44:26 +00:00
9f8ac737db
update mnist network
2017-07-11 12:11:47 +00:00