Fix hyperparam training

This commit is contained in:
MTRNord 2022-12-06 19:50:42 +01:00
parent ff58efb398
commit 94ca84b4cd
3 changed files with 4 additions and 1 deletion

.vscode/settings.json vendored Normal file

@@ -0,0 +1,3 @@
+{
+    "python.formatting.provider": "black"
+}
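The added workspace setting points VS Code's Python extension at black for formatting. For the setting to take effect, black itself has to be installed in the environment the editor uses, e.g. (assuming a pip-based environment):

```shell
# Install the black formatter so VS Code's
# "python.formatting.provider": "black" setting can invoke it.
pip install black
```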


@@ -5620,3 +5620,4 @@ spam If you have a Gmail account, a computer or a smartphone and you have 2 hour
 spam I'll help the community how to earn $30k within 3 days and hours but you will reimburse me 10% of your dividend when you collect it. Note: only interested people should involve, Whatsapp +1 (561) 788 1421 immediately
 spam Win up to $1000 in crypto trading when you invest with just the minimum of $50 Signup and start investing your crypto with. 💎NO STRESS 💎NO REFERRAL NEEDED!! 💎NO REGISTRATION FEE!!
 spam If you love to earn from #BITCOIN try GTFx only interested persons No KYC, no VPN + 30+ $BTC daily withdraws (without KYC) $5,000 DEPOSIT BONUS👇
+ham set_many_ordered_items takes an argument that is an iterator of types that implement `Into<AnyBase>`\nAll activitystreams types implement the Extends and ExtendsExt traits, which provide the `into_any_base` method, so you could write something like this:\n\n```rust\nlet v = items\n .into_iter()\n .map(|item| item.into_any_base())\n .collect::<Result<Vec<_>,_>>()?;\ncollection.set_many_ordered_items(v);\n```


@@ -195,7 +195,6 @@ def train_hyperparamters(
The hyperparameter search is complete. The optimal number of units in the first densely-connected
layer is {best_hps.get('dense1')} and the optimal learning rate for the optimizer is {best_hps.get('learning_rate')}.
The optimal dropout rate is {best_hps.get('dropout')} and the optimal l2 rate is {best_hps.get('l2')}.
The optimal embedding_dim is {best_hps.get('embedding_dim')}.
"""
)
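The hunk above reports the search results by calling `best_hps.get(...)` inside an f-string. A minimal, self-contained sketch of that reporting step, with a plain dict standing in for what is presumably a keras-tuner `HyperParameters` object (all values here are hypothetical; `dict.get` matches the calls used in the diff):

```python
# Stand-in for the tuner's best_hps object; values are made up.
best_hps = {
    "dense1": 128,          # hypothetical value
    "learning_rate": 1e-3,  # hypothetical value
    "dropout": 0.2,         # hypothetical value
    "l2": 1e-4,             # hypothetical value
    "embedding_dim": 64,    # hypothetical value
}

# Format the same summary message the function prints.
summary = f"""
The hyperparameter search is complete. The optimal number of units in the first densely-connected
layer is {best_hps.get('dense1')} and the optimal learning rate for the optimizer is {best_hps.get('learning_rate')}.
The optimal dropout rate is {best_hps.get('dropout')} and the optimal l2 rate is {best_hps.get('l2')}.
The optimal embedding_dim is {best_hps.get('embedding_dim')}.
"""
print(summary)
```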