Connor Olding
1ebb897f14
use @ operator
2017-10-19 04:12:16 +00:00
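For context, @ is PEP 465's infix matrix-multiply operator (Python 3.5+), which NumPy supports. A minimal before/after sketch of the kind of change this commit makes:

```python
import numpy as np

X = np.random.randn(4, 3)
W = np.random.randn(3, 2)

# before: explicit function call
Y_old = np.dot(X, W)

# after: the @ operator reads like the math
Y_new = X @ W

assert np.allclose(Y_old, Y_new)
```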
Connor Olding
a85ee67780
allow CLRs to use optimizer's learning rate
2017-10-19 04:03:44 +00:00
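CLR here presumably refers to a cyclic learning-rate schedule (Smith, 2015). A minimal triangular sketch whose peak can be seeded from the optimizer's own rate; the name triangular_clr and the optim.lr attribute are illustrative assumptions, not the library's API:

```python
def triangular_clr(t, period, lr_max, lr_min=1e-5):
    # triangular cyclic learning rate: ramp lr_min -> lr_max -> lr_min
    # over each period of t (measured in batches or epochs)
    cycle = (t % period) / period
    tri = 1.0 - abs(2.0 * cycle - 1.0)  # 0 -> 1 -> 0 across one period
    return lr_min + (lr_max - lr_min) * tri

# e.g. take the cycle's peak from the optimizer itself:
# lr = triangular_clr(t, period, lr_max=optim.lr)
```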
Connor Olding
9bb26b1ec5
add Huber loss
2017-09-25 16:37:52 +00:00
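The commit doesn't show the interface, but the standard Huber loss is quadratic near zero and linear in the tails, making it less outlier-sensitive than squared error. A hedged sketch, assuming residuals y_pred - y_true and a threshold delta:

```python
import numpy as np

def huber(y_true, y_pred, delta=1.0):
    # quadratic within +/-delta of zero, linear beyond it
    r = y_pred - y_true
    small = np.abs(r) <= delta
    return np.where(small, 0.5 * r**2, delta * (np.abs(r) - 0.5 * delta))

def huber_grad(y_true, y_pred, delta=1.0):
    # derivative w.r.t. y_pred: the residual, clipped to +/-delta
    r = y_pred - y_true
    return np.clip(r, -delta, delta)
```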
Connor Olding
eb16377ba6
add Adagrad optimizer
2017-09-25 16:06:45 +00:00
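A sketch of the standard Adagrad update; the compute method and its return-a-delta convention are assumptions about the library's Optimizer interface:

```python
import numpy as np

class Adagrad:
    # accumulates squared gradients so per-parameter rates shrink over time
    def __init__(self, lr=0.01, eps=1e-8):
        self.lr, self.eps = lr, eps
        self.g2 = None

    def compute(self, grad):
        if self.g2 is None:
            self.g2 = np.zeros_like(grad)
        self.g2 += grad * grad
        return -self.lr * grad / (np.sqrt(self.g2) + self.eps)
```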
Connor Olding
c964f143d2
not true
2017-09-25 07:12:19 +00:00
Connor Olding
a760c4841b
add fallback to optim.lr in AnnealingLearner
2017-09-25 06:10:54 +00:00
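A hedged sketch of what such a fallback might look like; the constructor signature and the halving schedule are purely illustrative, and only the optim.lr fallback itself comes from the commit:

```python
class AnnealingLearner:
    def __init__(self, optim, lr=None, halve_every=10.0):
        self.optim = optim
        # fall back to the optimizer's rate when none is given
        self.start_lr = lr if lr is not None else optim.lr
        self.halve_every = halve_every

    def rate_at(self, epoch):
        # exponential annealing: halve the rate every `halve_every` epochs
        # (epochs start at 1, hence the subtraction)
        return self.start_lr * 0.5 ** ((epoch - 1) / self.halve_every)
```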
Connor Olding
916c6fe1f0
assert that rituals have been prepared
2017-09-25 06:10:04 +00:00
Connor Olding
d38e2076f0
allow multi-input and multi-output models
2017-09-16 18:28:05 +00:00
Connor Olding
65ba80bb96
skip over irrelevant nodes
2017-09-16 17:31:39 +00:00
Connor Olding
e22316a4c9
move losses into Model and refactor methods
2017-09-16 17:05:25 +00:00
Connor Olding
910facf98d
move NLL to core
2017-08-05 10:59:05 +00:00
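For reference, the standard negative log-likelihood over class probabilities; the library's actual signature isn't shown here:

```python
import numpy as np

def nll(p, y):
    # -log of the probability assigned to the true class;
    # p: (N, classes) probabilities, y: (N,) integer labels
    return -np.log(p[np.arange(len(y)), y])
```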
Connor Olding
0b9c1fe117
allow SGDR to anneal optimizer's learning rate
e.g. YellowFin
2017-08-05 10:43:38 +00:00
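SGDR is cosine annealing with warm restarts (Loshchilov & Hutter, 2016). A sketch of the schedule; writing the result back into the optimizer is what lets self-tuning optimizers such as YellowFin follow it (the optim.lr attribute is an assumption):

```python
import math

def sgdr_lr(t, period, lr_max=0.05, lr_min=1e-5):
    # cosine annealing with warm restarts: decay from lr_max to lr_min
    # within each period, then jump back up and repeat
    frac = (t % period) / period
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + math.cos(math.pi * frac))
```

The learner would then assign the result each batch, e.g. `optim.lr = sgdr_lr(t, period)`.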
Connor Olding
dbd6c31ea5
fix final rate calculation
2017-08-05 10:43:18 +00:00
Connor Olding
915b39d783
allow Optimizers to inspect Models (currently unused)
the thing that takes advantage of this may or may not be committed,
so this may or may not get reverted.
2017-08-05 10:41:35 +00:00
Connor Olding
058a779f6c
remove some unused arguments
2017-08-05 10:39:32 +00:00
Connor Olding
001a997e09
correction: batches, not epochs.
2017-08-03 03:38:07 +00:00
Connor Olding
7ac67fba8f
fix Bias layer
2017-08-02 11:37:39 +00:00
Connor Olding
4ee2181691
add standalone Bias layer
2017-08-02 11:28:41 +00:00
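A bias layer just adds a learned per-feature offset, useful when a layer (or a set of shared weights) provides no bias of its own. A minimal sketch, assuming a forward/backward layer interface (method names are assumptions):

```python
import numpy as np

class Bias:
    # learns one offset per feature
    def __init__(self, dim):
        self.b = np.zeros(dim)
        self.db = np.zeros(dim)

    def forward(self, X):
        return X + self.b

    def backward(self, dY):
        # d/db of (X + b) is 1, so the bias gradient sums over the batch
        self.db += dY.sum(axis=0)
        return dY
```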
Connor Olding
e7c12c1f44
add ad-hoc weight-sharing method
2017-08-02 11:28:18 +00:00
Connor Olding
4d2251f69f
allow weight sharing; make gradient clearing optional
2017-08-02 10:29:58 +00:00
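A hedged sketch of one way weight sharing like this can work, with the gradient buffer shared and accumulated across layers; the Dense class and its methods are illustrative, not the library's actual API:

```python
import numpy as np

class Dense:
    def __init__(self, n_in, n_out, shared=None):
        if shared is None:
            self.W = np.random.randn(n_in, n_out) * 0.1
            self.dW = np.zeros_like(self.W)
        else:
            # share both the weights and the gradient buffer
            self.W, self.dW = shared.W, shared.dW

    def forward(self, X):
        self.X = X
        return X @ self.W

    def backward(self, dY):
        # accumulate instead of overwriting, so every layer using the
        # shared weights contributes to the same gradient
        self.dW += self.X.T @ dY
        return dY @ self.W.T

a = Dense(32, 32)
b = Dense(32, 32, shared=a)  # b reuses a's weights and gradients
```

With accumulation across layers, the shared buffer has to be cleared once per training step rather than inside each layer, which is presumably why clearing becomes optional.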
Connor Olding
f28e8d3a54
add/remove comments and fix code style
2017-08-02 03:59:15 +00:00
Connor Olding
5d9efa71c1
move SquaredHalved to core
2017-07-25 22:14:17 +00:00
Connor Olding
f43063928e
rename Linear activation to Identity layer
2017-07-25 22:12:27 +00:00
Connor Olding
2cf38d4ece
finally fix learning rate scheduling for real
okay, this was a disaster, but i think i've got it under control now.
batch-based learners now work like this:
the integer part of the epoch variable is the epoch we're working towards,
and the fractional part is how far we are into that epoch.
epochs start at 1, so subtract 1 when doing periodic operations.
2017-07-25 04:25:35 +00:00
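A sketch of that bookkeeping; the function and variable names are illustrative:

```python
def epoch_state(epoch):
    # int(epoch) is the epoch being worked towards;
    # epoch % 1.0 is how far we are into it.
    working_towards = int(epoch)
    progress = epoch % 1.0
    # epochs start at 1, so periodic operations subtract 1 first:
    elapsed = epoch - 1.0
    return working_towards, progress, elapsed

epoch = 1.0
batches_per_epoch = 100
for _ in range(batches_per_epoch):
    # ... train one batch, consulting schedules via epoch_state(epoch) ...
    epoch += 1.0 / batches_per_epoch
# epoch is now ~2.0: working towards epoch 2, 0.0 of the way in
```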
Connor Olding
93547b1974
add a linear (identity) activation for good measure
2017-07-25 04:24:32 +00:00
Connor Olding
be1795f6ed
use in-place (additive) form of filters
2017-07-21 21:02:47 +00:00
Connor Olding
7c4ef4ad05
fix Softplus derivative
2017-07-21 21:02:04 +00:00
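For reference, softplus and its correct derivative, which is the logistic sigmoid rather than softplus itself (a common slip; the actual bug being fixed isn't shown):

```python
import numpy as np

def softplus(x):
    # numerically stable log(1 + exp(x))
    return np.logaddexp(0.0, x)

def softplus_prime(x):
    # the derivative is the logistic sigmoid
    return 1.0 / (1.0 + np.exp(-x))
```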
Connor Olding
c2bb2cfcd5
add centered variant of RMSProp
2017-07-21 20:20:42 +00:00
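Centered RMSProp (Graves, 2013) tracks the gradient's running mean as well as its second moment, and normalizes by a variance estimate instead of the raw second moment. A sketch under the same assumed Optimizer interface as above:

```python
import numpy as np

class RMSPropCentered:
    def __init__(self, lr=1e-4, mu=0.95, eps=1e-8):
        self.lr, self.mu, self.eps = lr, mu, eps
        self.g1 = None  # running mean of gradients
        self.g2 = None  # running mean of squared gradients

    def compute(self, grad):
        if self.g1 is None:
            self.g1 = np.zeros_like(grad)
            self.g2 = np.zeros_like(grad)
        self.g1 = self.mu * self.g1 + (1 - self.mu) * grad
        self.g2 = self.mu * self.g2 + (1 - self.mu) * grad * grad
        var = self.g2 - self.g1 * self.g1  # centered: subtract squared mean
        return -self.lr * grad / np.sqrt(var + self.eps)
```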
Connor Olding
fb22f64716
tweak semantics etc.
2017-07-21 19:45:58 +00:00
Connor Olding
928850c2a8
lower process priority
2017-07-11 12:44:26 +00:00
Connor Olding
112e263056
fix code i forgot to test, plus some tweaks
2017-07-11 11:36:11 +00:00
Connor Olding
7bd5518650
note to self on how to handle generators
2017-07-11 11:23:27 +00:00
Connor Olding
436f45fbb0
rewrite Ritual to reduce code duplication
2017-07-03 11:54:37 +00:00
Connor Olding
6a3f047ddc
rename alpha to lr where applicable
2017-07-02 05:39:51 +00:00
Connor Olding
1b1184480a
allow optimizers to adjust their own learning rate
2017-07-02 02:52:07 +00:00
Connor Olding
22dc651cce
move lament into core
2017-07-01 02:22:34 +00:00
Connor Olding
7da93e93a8
move graph printing into Model class
2017-07-01 02:17:46 +00:00
Connor Olding
69786b40a1
begin work on multiple input/output nodes
2017-07-01 00:44:56 +00:00
Connor Olding
a7c4bdaa2e
remove dead line and punctuate comment
2017-06-30 21:13:37 +00:00
Connor Olding
c02fba01e2
various
use updated filenames.
don't use emnist by default.
tweak expando integer handling.
add some comments.
2017-06-26 00:16:51 +00:00
Connor Olding
a770444199
shorten names
2017-06-25 22:08:07 +00:00