GitHub Project Recommendation | Deepy – A Small, Elegant, and Simple Deep Learning Library Built on NumPy (Python) 2019-01-29

Just for fun, the author wrote a deep learning library in Python.

It uses NumPy for its computations, and its API is similar to PyTorch's.

GitHub repository: https://github.com/kaszperro/deepy



Examples

The examples directory contains a linear classifier that achieves over 96% accuracy.


Creating a sequential model:

from deepy.module import Linear, Sequential
from deepy.autograd.activations import Softmax, ReLU

# A 784-300-300-10 multilayer perceptron with a softmax output
my_model = Sequential(
    Linear(28 * 28, 300),
    ReLU(),
    Linear(300, 300),
    ReLU(),
    Linear(300, 10),
    Softmax()
)
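
Once built, the model can be called directly on a Variable, just as the loss examples below do. This is only a minimal forward-pass sketch; the batch size and the batch-first input layout are assumptions made for illustration:

from deepy.variable import Variable
import numpy as np

# Assumed batch-first input: 32 flattened 28x28 images
batch = Variable(np.zeros((32, 28 * 28)))
predictions = my_model(batch)  # per-class softmax scores for each image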

Losses:

from deepy.module import Linear
from deepy.autograd.losses import CrossEntropyLoss, MSELoss
from deepy.variable import Variable
import numpy as np

my_model = Linear(10, 10)

loss1 = CrossEntropyLoss()
loss2 = MSELoss()


good_output = Variable(np.zeros((10, 10)))  # target values
model_input = Variable(np.ones((10, 10)))
model_output = my_model(model_input)

error = loss1(good_output, model_output)

# now you can propagate the error backwards:
error.backward()
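
The MSELoss instance defined above is not used in the snippet. Presumably it is invoked the same way, since both losses are constructed identically; this is an assumption, not something shown in the original example:

# Hypothetical usage, assuming MSELoss shares CrossEntropyLoss's call signature
mse_error = loss2(good_output, model_output)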

Optimization:

from deepy.module import Linear
from deepy.autograd.losses import CrossEntropyLoss, MSELoss
from deepy.variable import Variable
from deepy.autograd.optimizers import SGD
import numpy as np


my_model = Linear(10, 10)

loss1 = CrossEntropyLoss()
loss2 = MSELoss()

optimizer1 = SGD(my_model.get_variables_list())

good_output = Variable(np.zeros((10,10)))
model_input = Variable(np.ones((10,10)))
model_output = my_model(model_input)

error = loss1(good_output, model_output)

# zero any gradients accumulated in previous steps:
optimizer1.zero_grad()

# now you can propagate the error backwards:
error.backward()

# and then the optimizer can update the variables:
optimizer1.step()
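
Putting these pieces together, here is a rough end-to-end sketch of a training loop built only from the calls shown above. The random placeholder data, the number of epochs, and the loop structure are illustrative assumptions; the real linear classifier that reaches over 96% accuracy lives in the repository's examples directory.

import numpy as np
from deepy.module import Linear, Sequential
from deepy.autograd.activations import Softmax, ReLU
from deepy.autograd.losses import CrossEntropyLoss
from deepy.autograd.optimizers import SGD
from deepy.variable import Variable

model = Sequential(
    Linear(28 * 28, 300),
    ReLU(),
    Linear(300, 10),
    Softmax()
)
loss_fn = CrossEntropyLoss()
optimizer = SGD(model.get_variables_list())

# Placeholder data (assumption): 32 random "images" and one-hot targets
inputs = Variable(np.random.rand(32, 28 * 28))
targets = Variable(np.eye(10)[np.random.randint(0, 10, size=32)])

for epoch in range(10):
    optimizer.zero_grad()              # clear gradients from the previous step
    outputs = model(inputs)            # forward pass
    error = loss_fn(targets, outputs)  # target first, as in the snippets above
    error.backward()                   # backpropagate
    optimizer.step()                   # update the model's variables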

Project repository: https://github.com/kaszperro/deepy
