91cdea3b26
fix inequalities in HardClip
oldest trick in the book
2018-03-10 05:03:26 +01:00
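The HardClip layer referenced above (added further down in this log) is a clamp to a fixed range; the "inequalities" are presumably the comparisons deciding where gradient is allowed through. A minimal sketch of the idea, not the library's actual class:

```python
import numpy as np

class HardClipSketch:
    """Illustrative only: clamp activations to [lower, upper] on the forward
    pass; gradient flows only where the input was strictly inside the range."""
    def __init__(self, lower=-1.0, upper=1.0):
        self.lower, self.upper = lower, upper

    def forward(self, x):
        self.x = x
        return np.clip(x, self.lower, self.upper)

    def backward(self, dy):
        # the choice of > vs >= here is exactly the kind of inequality
        # a commit like this would be fixing
        mask = (self.x > self.lower) & (self.x < self.upper)
        return dy * mask
```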
a6519f5455
improve notes on dependencies
2018-03-09 10:17:31 +01:00
bd4f2a9478
fix missing import for ActivityRegularizer
2018-03-09 10:09:50 +01:00
9a45b26b7f
add rough stratified k-folding utility class
2018-03-08 02:41:45 +01:00
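For context, a rough sketch of what a stratified k-folding utility typically does; the function name and interface here are hypothetical, not the committed class:

```python
import numpy as np

def stratified_folds(labels, k=5, seed=0):
    """Yield (train_idx, valid_idx) pairs where each fold roughly
    preserves the class proportions of `labels`."""
    rng = np.random.RandomState(seed)
    labels = np.asarray(labels)
    folds = [[] for _ in range(k)]
    for cls in np.unique(labels):
        idx = np.flatnonzero(labels == cls)
        rng.shuffle(idx)
        for i, chunk in enumerate(np.array_split(idx, k)):
            folds[i].extend(chunk.tolist())
    for i in range(k):
        valid = np.sort(np.asarray(folds[i], dtype=int))
        rest = [j for f in folds[:i] + folds[i + 1:] for j in f]
        yield np.sort(np.asarray(rest, dtype=int)), valid
```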
65bc9b8a6f
optionally allow gradients to passthru Input layer
2018-03-08 02:40:56 +01:00
4746103978
add HardClip activation layer
2018-03-08 02:40:42 +01:00
44cae4ad50
add LookupLearner
2018-03-07 01:58:17 +01:00
8abbb1e713
add NoiseInjector and NoiseMultiplier layers
2018-03-07 01:54:48 +01:00
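Judging by the names alone, these are likely additive and multiplicative noise layers; the sketch below shows the usual form of each and is an assumption, not the committed code:

```python
import numpy as np

def inject_noise(x, sigma=0.1, training=True, rng=np.random):
    # additive zero-mean Gaussian noise, active only during training
    if not training:
        return x
    return x + rng.normal(0.0, sigma, size=x.shape)

def multiply_noise(x, sigma=0.1, training=True, rng=np.random):
    # multiplicative noise centered on 1, active only during training
    if not training:
        return x
    return x * rng.normal(1.0, sigma, size=x.shape)
```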
604ffb9fa1
add variant of L1L2 regularization using averages
2018-03-07 01:53:40 +01:00
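One plausible reading of "L1L2 using averages" is penalizing the mean absolute and mean squared weight instead of the sum, so the penalty strength does not scale with layer size. Illustrative sketch, not the committed implementation:

```python
import numpy as np

def l1l2_avg(w, l1=0.0, l2=0.0):
    # mean-based penalty: independent of how many weights the layer has
    return l1 * np.mean(np.abs(w)) + l2 * np.mean(np.square(w))
```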
713fd2adbe
add experimental soft-clipped optimizers
2018-03-07 01:52:26 +01:00
3aa3b70a9f
add AMSgrad optimizer
2018-03-07 01:30:04 +01:00
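AMSgrad (Reddi et al., 2018) is Adam with a running maximum of the second-moment estimate in the denominator, which keeps the effective step from growing. A self-contained sketch, not the repository's implementation:

```python
import numpy as np

class AMSgradSketch:
    def __init__(self, lr=1e-3, b1=0.9, b2=0.999, eps=1e-8):
        self.lr, self.b1, self.b2, self.eps = lr, b1, b2, eps
        self.m = self.v = self.vmax = None
        self.t = 0

    def step(self, w, grad):
        if self.m is None:
            self.m = np.zeros_like(w)
            self.v = np.zeros_like(w)
            self.vmax = np.zeros_like(w)
        self.t += 1
        self.m = self.b1 * self.m + (1 - self.b1) * grad
        self.v = self.b2 * self.v + (1 - self.b2) * grad * grad
        self.vmax = np.maximum(self.vmax, self.v)   # the AMSgrad twist
        mhat = self.m / (1 - self.b1 ** self.t)     # bias correction (some variants skip it)
        return w - self.lr * mhat / (np.sqrt(self.vmax) + self.eps)
```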
0641c747c9
add Arcsinh activation
2018-03-07 01:29:48 +01:00
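Arcsinh as an activation is roughly linear near zero and logarithmic in the tails, with a simple derivative:

```python
import numpy as np

def arcsinh(x):
    return np.arcsinh(x)

def arcsinh_grad(x):
    # d/dx asinh(x) = 1 / sqrt(1 + x^2)
    return 1.0 / np.sqrt(1.0 + x * x)
```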
8ce2ec1ad4
add missing import
2018-02-10 11:28:43 +01:00
39bbf27860
add onehot utility function
2018-02-02 08:52:32 +01:00
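The usual shape of such a utility (the interface here is a guess, not the committed signature):

```python
import numpy as np

def onehot(labels, num_classes=None):
    """Map integer class labels to one-hot rows."""
    labels = np.asarray(labels, dtype=int)
    if num_classes is None:
        num_classes = int(labels.max()) + 1
    out = np.zeros((labels.size, num_classes), dtype=np.float32)
    out[np.arange(labels.size), labels] = 1.0
    return out
```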
e7783188bb
tweak float exports
2018-02-02 08:51:39 +01:00
169303813d
basic PEP 8 compliance
rip readability
2018-01-22 19:40:36 +00:00
c81ce0afbb
rename stuff and add a couple missing imports
2018-01-21 22:16:36 +00:00
bbdb91fcb1
merge and split modules into a package
2018-01-21 22:07:57 +00:00
db65fbdd62
add Neumann optimizer
2018-01-12 15:42:04 +00:00
1ebb897f14
use @ operator
2017-10-19 04:12:16 +00:00
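That is, switching from np.dot to the PEP 465 matrix-multiplication operator available since Python 3.5 / NumPy 1.10:

```python
import numpy as np

x = np.random.randn(4, 3)
w = np.random.randn(3, 2)
b = np.zeros(2)

y_old = np.dot(x, w) + b   # before
y_new = x @ w + b          # after: identical result, easier to read
```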
a85ee67780
allow CLRs to use optimizer's learning rate
2017-10-19 04:03:44 +00:00
763246df98
add RMSpropCentered to model from config
2017-09-26 23:12:40 +00:00
9bb26b1ec5
add Huber loss
2017-09-25 16:37:52 +00:00
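Huber loss is quadratic for small residuals and linear beyond a threshold delta, making it less sensitive to outliers than squared error. A textbook sketch:

```python
import numpy as np

def huber(y_pred, y_true, delta=1.0):
    r = y_pred - y_true
    small = np.abs(r) <= delta
    per_elem = np.where(small, 0.5 * r * r, delta * (np.abs(r) - 0.5 * delta))
    return np.mean(per_elem)
```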
eb16377ba6
add Adagrad optimizer
2017-09-25 16:06:45 +00:00
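For reference, the textbook Adagrad update: accumulate squared gradients per parameter and divide the step by their square root, so frequently-updated parameters slow down over time.

```python
import numpy as np

class AdagradSketch:
    def __init__(self, lr=0.01, eps=1e-8):
        self.lr, self.eps = lr, eps
        self.g2 = None

    def step(self, w, grad):
        if self.g2 is None:
            self.g2 = np.zeros_like(w)
        self.g2 += grad * grad
        return w - self.lr * grad / (np.sqrt(self.g2) + self.eps)
```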
c964f143d2
not true
2017-09-25 07:12:19 +00:00
5b6fd6259f
update example
2017-09-25 06:28:59 +00:00
a760c4841b
add fallback to optim.lr in AnnealingLearner
2017-09-25 06:10:54 +00:00
916c6fe1f0
assert that rituals have been prepared
2017-09-25 06:10:04 +00:00
615f43c550
support AddSign and PowerSign in config
2017-09-25 06:09:36 +00:00
fe54002671
remove DumbLearner
2017-09-25 06:09:07 +00:00
9a7ffe5f0d
add AddSign and PowerSign optimizers
2017-09-25 04:02:17 +00:00
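AddSign and PowerSign come from Bello et al. (2017), "Neural Optimizer Search with Reinforcement Learning": the gradient is scaled up when its sign agrees with the sign of its moving average and down when it disagrees, additively in one case and exponentially in the other. A compact sketch (the moving average `m` starts at zeros), not the committed classes:

```python
import numpy as np

def sign_agreement(m, grad, beta=0.9):
    m = beta * m + (1 - beta) * grad          # moving average of gradients
    return m, np.sign(grad) * np.sign(m)      # +1 where they agree, -1 where not

def addsign_step(w, grad, m, lr=0.01, alpha=1.0, beta=0.9):
    m, agree = sign_agreement(m, grad, beta)
    return w - lr * (alpha + agree) * grad, m

def powersign_step(w, grad, m, lr=0.01, beta=0.9):
    m, agree = sign_agreement(m, grad, beta)
    return w - lr * np.exp(agree) * grad, m   # base e, i.e. alpha = e
```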
5c83f063be
remove keras stuff so it won't show on google
bleh
2017-09-18 04:42:41 +00:00
d38e2076f0
allow multi-input and multi-output models
2017-09-16 18:28:05 +00:00
3386869b30
move actreg tweaking into if statement
I was getting division by zero.
2017-09-16 17:33:47 +00:00
65ba80bb96
skip over irrelevant nodes
2017-09-16 17:31:39 +00:00
dcbaef3032
use MomentumClip in warmup for stability
2017-09-16 17:30:52 +00:00
7878f94f43
auto-increment loss filenames to stop clobbering
2017-09-16 17:30:02 +00:00
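The idea: before writing a loss dump, bump a counter in the filename until the path is free, so earlier runs are not overwritten. A minimal sketch; the actual naming scheme in the repo may differ:

```python
import os

def next_free_path(base, ext=".npy"):
    path = base + ext
    i = 1
    while os.path.exists(path):
        path = "{}.{}{}".format(base, i, ext)
        i += 1
    return path
```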
e22316a4c9
move losses into Model and refactor methods
2017-09-16 17:05:25 +00:00
910facf98d
move NLL to core
2017-08-05 10:59:05 +00:00
0b9c1fe117
allow SGDR to anneal optimizer's learning rate
e.g. YellowFin
2017-08-05 10:43:38 +00:00
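SGDR (Loshchilov & Hutter, 2017) is cosine annealing with warm restarts; this change apparently lets the schedule scale the optimizer's own learning rate (useful when that rate is itself adaptive, e.g. YellowFin) rather than a fixed constant. The schedule itself, as a sketch:

```python
import numpy as np

def sgdr_lr(t, period, lr_max, lr_min=0.0):
    # cosine decay from lr_max to lr_min over `period` steps, then restart
    t_cur = t % period
    return lr_min + 0.5 * (lr_max - lr_min) * (1.0 + np.cos(np.pi * t_cur / period))
```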
dbd6c31ea5
fix final rate calculation
2017-08-05 10:43:18 +00:00
915b39d783
allow Optimizers to inspect Models (currently unused)
the thing that takes advantage of this may or may not be committed,
so this may or may not get reverted.
2017-08-05 10:41:35 +00:00
de5af4f7f4
allow argument passthru to normalizer in _mr_make_norm
2017-08-05 10:40:39 +00:00
957ee86e20
add PolyLearner: polynomial learning scheduler
2017-08-05 10:40:06 +00:00
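A polynomial schedule typically decays the rate as a power of the remaining fraction of training; a common form (assumed here, the committed class may differ):

```python
def poly_lr(t, total, lr0, power=2.0):
    # power=1 is linear decay; higher powers drop the rate faster early on
    frac = min(max(t / float(total), 0.0), 1.0)
    return lr0 * (1.0 - frac) ** power
```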
cc89465adc
tweak comment
2017-08-05 10:39:59 +00:00
058a779f6c
remove some unused arguments
2017-08-05 10:39:32 +00:00
2e74c9160c
tweak CubicGB defaults
2017-08-03 03:39:25 +00:00
001a997e09
correction: batches, not epochs.
2017-08-03 03:38:07 +00:00
9138f73141
update mnist training
crank up the learning rate on emnist and use momentum with gradient clipping.
add a simple restart callback.
remove batch size adaptation crap.
remove confidence measures.
2017-08-03 03:36:46 +00:00
7ac67fba8f
fix Bias layer
2017-08-02 11:37:39 +00:00