Commit graph

31 commits

SHA1 Message Date
910facf98d move NLL to core 2017-08-05 10:59:05 +00:00
0b9c1fe117 allow SGDR to anneal optimizer's learning rate
e.g. YellowFin
2017-08-05 10:43:38 +00:00
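As a rough illustration of what this commit enables, the sketch below shows an SGDR-style cosine schedule with warm restarts that overwrites a wrapped optimizer's learning rate each step, so that even an optimizer with its own rate adaptation (e.g. YellowFin) can still be annealed. The names (`SGDR`, `optim.lr`, `update`) are assumptions for illustration, not the repository's actual API.

```python
import math

# Hedged sketch: SGDR-style cosine annealing with warm restarts, pushed into
# whatever optimizer is being wrapped by assigning to its `lr` attribute.
class SGDR:
    def __init__(self, optim, period, lr_max, lr_min=0.0):
        self.optim = optim        # any optimizer object exposing a settable `lr`
        self.period = period      # restart period, measured in epochs
        self.lr_max = lr_max
        self.lr_min = lr_min

    def update(self, epochs_since_restart):
        # Fraction of the current restart period that has elapsed, in [0, 1).
        t = (epochs_since_restart % self.period) / self.period
        lr = self.lr_min + 0.5 * (self.lr_max - self.lr_min) * (1 + math.cos(math.pi * t))
        self.optim.lr = lr        # anneal the underlying optimizer's rate
```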
dbd6c31ea5 fix final rate calculation 2017-08-05 10:43:18 +00:00
915b39d783 allow Optimizers to inspect Models (currently unused)
the thing that takes advantage of this may or may not be committed,
so this may or may not get reverted.
2017-08-05 10:41:35 +00:00
058a779f6c remove some unused arguments 2017-08-05 10:39:32 +00:00
001a997e09 correction: batches, not epochs. 2017-08-03 03:38:07 +00:00
7ac67fba8f fix Bias layer 2017-08-02 11:37:39 +00:00
4ee2181691 add standalone Bias layer 2017-08-02 11:28:41 +00:00
e7c12c1f44 add ad-hoc weight-sharing method 2017-08-02 11:28:18 +00:00
4d2251f69f allow weight sharing; disableable gradient clearing 2017-08-02 10:29:58 +00:00
f28e8d3a54 add/remove comments and fix code style 2017-08-02 03:59:15 +00:00
5d9efa71c1 move SquaredHalved to core 2017-07-25 22:14:17 +00:00
f43063928e rename Linear activation to Identity layer 2017-07-25 22:12:27 +00:00
2cf38d4ece finally fix learning rate scheduling for real
okay, this is a disaster, but i think i've got it under control now.

the way batch-based learners now work is:
the epoch we're working towards is the truncated part of the epoch variable,
and how far we are into the epoch is the fractional part.

epoch starts at 1, so subtract by 1 when doing periodic operations.
2017-07-25 04:25:35 +00:00
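For reference, a minimal sketch of the epoch convention this commit describes: the integer part of the epoch variable is the (1-based) epoch being worked towards, the fractional part is progress through it, and periodic code subtracts 1 before using it. The function names here are illustrative, not the repository's actual API.

```python
def epoch_variable(batch, batches_per_epoch):
    # `batch` is the 0-based count of batches completed so far overall.
    # Integer part = the 1-based epoch we're working towards;
    # fractional part = how far into that epoch we are.
    return 1 + batch / batches_per_epoch

def schedule_time(epoch):
    # epoch starts at 1, so subtract 1 when doing periodic operations.
    return epoch - 1

if __name__ == "__main__":
    bpe = 100
    for b in (0, 50, 100, 150):
        e = epoch_variable(b, bpe)
        print(b, e, schedule_time(e))   # -> 1.0/0.0, 1.5/0.5, 2.0/1.0, 2.5/1.5
```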
93547b1974 add a linear (identity) activation for good measure 2017-07-25 04:24:32 +00:00
be1795f6ed use in-place (additive) form of filters 2017-07-21 21:02:47 +00:00
7c4ef4ad05 fix Softplus derivative 2017-07-21 21:02:04 +00:00
c2bb2cfcd5 add centered variant of RMS Prop 2017-07-21 20:20:42 +00:00
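For context, the centered variant of RMSProp (Graves, 2013) additionally tracks a running mean of the gradients and subtracts its square from the running mean of squared gradients, dividing by an estimate of the gradient variance rather than the raw second moment. The sketch below is illustrative only and does not mirror the repository's implementation.

```python
import numpy as np

# Hedged sketch of one centered RMSProp step; variable names are illustrative.
def centered_rmsprop_step(param, grad, g_avg, sq_avg, lr=1e-3, rho=0.99, eps=1e-8):
    g_avg[:] = rho * g_avg + (1 - rho) * grad           # running mean of gradients
    sq_avg[:] = rho * sq_avg + (1 - rho) * grad * grad  # running mean of squared gradients
    var = sq_avg - g_avg * g_avg                        # centering: variance estimate
    param -= lr * grad / np.sqrt(var + eps)
    return param
```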
fb22f64716 tweak semantics etc. 2017-07-21 19:45:58 +00:00
928850c2a8 lower process priority 2017-07-11 12:44:26 +00:00
112e263056 fix code i forgot to test, plus some tweaks 2017-07-11 11:36:11 +00:00
7bd5518650 note to self on how to handle generators 2017-07-11 11:23:27 +00:00
436f45fbb0 rewrite Ritual to reduce code duplication 2017-07-03 11:54:37 +00:00
6a3f047ddc rename alpha to lr where applicable 2017-07-02 05:39:51 +00:00
1b1184480a allow optimizers to adjust their own learning rate 2017-07-02 02:52:07 +00:00
22dc651cce move lament into core 2017-07-01 02:22:34 +00:00
7da93e93a8 move graph printing into Model class 2017-07-01 02:17:46 +00:00
69786b40a1 begin work on multiple input/output nodes 2017-07-01 00:44:56 +00:00
a7c4bdaa2e remove dead line and punctuate comment 2017-06-30 21:13:37 +00:00
c02fba01e2 various
use updated filenames.
don't use emnist by default.
tweak expando integer handling.
add some comments.
2017-06-26 00:16:51 +00:00
a770444199 shorten names 2017-06-25 22:08:07 +00:00
Renamed from optim_nn_core.py