Deep Reading

2021-06-17 Notes

Definition of Focal Loss

Theoretical definition: Focal Loss can be viewed as a loss function that lowers the weight of easily classified samples while increasing the weight of hard-to-classify samples.
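
In the original paper the loss is written as FL(p_t) = -alpha_t * (1 - p_t)^gamma * log(p_t). Below is a minimal sketch of the binary case in PyTorch; the function name and the default alpha/gamma values are only illustrative, not a reference implementation.

import torch
import torch.nn.functional as F

def focal_loss(logits, targets, alpha=0.25, gamma=2.0):
    # Per-sample binary cross-entropy, left unreduced so each sample can be reweighted
    bce = F.binary_cross_entropy_with_logits(logits, targets, reduction="none")
    p = torch.sigmoid(logits)
    p_t = p * targets + (1 - p) * (1 - targets)              # probability of the true class
    alpha_t = alpha * targets + (1 - alpha) * (1 - targets)
    # (1 - p_t)^gamma shrinks the loss contribution of easy, high-confidence samples
    return (alpha_t * (1 - p_t) ** gamma * bce).mean()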

pandas to JSON

import json
import pandas as pd

df = pd.DataFrame([["a", "b"], ["c", "d"]], columns=["col 1", "col 2"])
result = df.to_json(orient="records")
parsed = json.loads(result)
print(json.dumps(parsed, indent=4))

[
    {
        "col 1": "a",
        "col 2": "b"
    },
    {
        "col 1": "c",
        "col 2": "d"
    }
]

Keras is also nice

● Organizations

● PyTorch Lightning

Lots of good projects here: https://github.com/PyTorchLightning

https://github.com/PyTorchLightning/lightning-flash

PyTorchLightning/lightning-flash: a collection of tasks for fast prototyping, baselining, fine-tuning, and solving deep learning problems.

https://github.com/PyTorchLightning/lightning-tutorials

https://github.com/PyTorchLightning/deep-learning-project-template

https://github.com/PyTorchLightning/lightning-bolts

● Fastai

https://github.com/fastai

https://github.com/fastai/fastbook

● GitHub topics

● pytorch-lightning

https://github.com/topics/pytorch-lightning

● Jupyter notebooks

https://github.com/topics/jupyter-notebook

● Learning resources

● PyTorch Tutorial

PyTorch Tutorial for Deep Learning Researchers https://github.com/yunjey/pytorch-tutorial

● Notebooks using the Hugging Face libraries

https://github.com/huggingface/notebooks

● Worth referencing

● NN Template

https://github.com/lucmos/nn-template Generic template to bootstrap your PyTorch project and avoid writing boilerplate code.

● lightning-hydra-template

Deep learning project template with best practices, using PyTorch Lightning, Hydra, and TensorBoard. https://github.com/lkhphuc/lightning-hydra-template

A repo for training neural nets using pytorch-lightning and hydra: https://github.com/Erlemar/pytorch_tempest

● Official Keras examples

https://keras.io/examples/

● Model optimization and compression

● Distillation resources

https://github.com/karanchahal/distiller

● net2net

https://github.com/CompVis/net2net Network-to-network translation with conditional invertible neural networks

● Knowledge Distillation Toolkit

This toolkit lets you compress a machine learning model using knowledge distillation. To use it, you provide a teacher model, a student model, data loaders for training and validation, and an inference pipeline. The toolkit is built on PyTorch and PyTorch Lightning, so the teacher and student models need to be PyTorch neural network modules, and the data loaders need to be PyTorch data loaders. https://github.com/georgian-io/Knowledge-Distillation-Toolkit
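
As a rough sketch of the teacher/student setup described above (this is not the toolkit's actual API; the function name, temperature T, and weight alpha are illustrative assumptions), a typical distillation loss blends a softened KL term against the teacher's logits with the ordinary hard-label loss:

import torch
import torch.nn.functional as F

def distillation_loss(student_logits, teacher_logits, labels, T=4.0, alpha=0.7):
    # Soft targets: KL divergence between temperature-scaled class distributions
    soft = F.kl_div(
        F.log_softmax(student_logits / T, dim=-1),
        F.softmax(teacher_logits / T, dim=-1),
        reduction="batchmean",
    ) * (T * T)  # rescale so the soft term keeps a comparable gradient magnitude
    # Hard targets: standard cross-entropy against the ground-truth labels
    hard = F.cross_entropy(student_logits, labels)
    return alpha * soft + (1 - alpha) * hard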
