[fishexpert's personal notes] Study material compilation

Deep learning
Machine learning

#121

https://blog.csdn.net/walkerJong/article/details/7994326


#122

https://pytorch.org/tutorials/beginner/nlp/sequence_models_tutorial.html#sphx-glr-beginner-nlp-sequence-models-tutorial-py
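
For quick reference, a minimal sketch of the kind of nn.LSTM call the tutorial above builds on; the sizes (3 input features, hidden size 3, sequence length 5, batch 1) are chosen here purely for illustration.

import torch
import torch.nn as nn

# Tiny LSTM: 3 input features, hidden size 3 (illustrative sizes only)
lstm = nn.LSTM(input_size=3, hidden_size=3)

# Sequence of length 5, batch size 1 (nn.LSTM defaults to seq_len-first layout)
inputs = torch.randn(5, 1, 3)
h0 = torch.randn(1, 1, 3)   # initial hidden state
c0 = torch.randn(1, 1, 3)   # initial cell state

out, (hn, cn) = lstm(inputs, (h0, c0))
print(out.shape)   # torch.Size([5, 1, 3]) -- hidden state at every time step
print(hn.shape)    # torch.Size([1, 1, 3]) -- final hidden state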


#123

https://pytorch.org/docs/stable/torch.html#torch.mm
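
torch.mm is plain 2-D matrix multiplication with no broadcasting; a tiny sketch of its shape behaviour:

import torch

a = torch.randn(2, 3)
b = torch.randn(3, 4)

# torch.mm only accepts 2-D tensors; the result has shape (2, 4)
c = torch.mm(a, b)
print(c.shape)  # torch.Size([2, 4])

# For batched or broadcasting matrix products use torch.bmm / torch.matmul instead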


#124

PyTorch multi-GPU training

https://www.zhihu.com/question/67726969
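
A minimal single-machine multi-GPU sketch using nn.DataParallel, the simplest option discussed in threads like the one above; the toy model and batch size are made up for illustration.

import torch
import torch.nn as nn

# Toy model; any nn.Module works the same way
model = nn.Sequential(nn.Linear(128, 64), nn.ReLU(), nn.Linear(64, 10))

if torch.cuda.device_count() > 1:
    # Replicates the model on each visible GPU and splits the batch along dim 0
    model = nn.DataParallel(model)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model = model.to(device)

x = torch.randn(32, 128).to(device)   # a batch of 32 is scattered across the GPUs
y = model(x)
print(y.shape)                        # torch.Size([32, 10])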


#125

PyTorch distributed training
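
A hedged sketch of the DistributedDataParallel pattern (one process per GPU, launched with torch.distributed.launch or torchrun); the model, batch size and learning rate are placeholders.

import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    # LOCAL_RANK is set by the launcher (torch.distributed.launch / torchrun)
    local_rank = int(os.environ.get("LOCAL_RANK", 0))
    dist.init_process_group(backend="nccl")          # one process per GPU
    torch.cuda.set_device(local_rank)

    model = nn.Linear(128, 10).cuda(local_rank)      # placeholder model
    model = DDP(model, device_ids=[local_rank])      # gradients are all-reduced across ranks

    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)
    for _ in range(10):                              # dummy training loop
        x = torch.randn(32, 128).cuda(local_rank)
        loss = model(x).sum()
        optimizer.zero_grad()
        loss.backward()                              # backward triggers the all-reduce
        optimizer.step()

    dist.destroy_process_group()

if __name__ == "__main__":
    main()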


#126

https://github.com/uber/horovod
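
The Horovod repo above; a rough sketch of its PyTorch usage pattern (hvd.init, one GPU per process, DistributedOptimizer, parameter broadcast), with a placeholder model and training loop.

import torch
import torch.nn as nn
import horovod.torch as hvd

hvd.init()                                    # one process per GPU, started with horovodrun/mpirun
torch.cuda.set_device(hvd.local_rank())

model = nn.Linear(128, 10).cuda()             # placeholder model
optimizer = torch.optim.SGD(model.parameters(), lr=0.01 * hvd.size())

# Wrap the optimizer so gradients are averaged across workers
optimizer = hvd.DistributedOptimizer(optimizer, named_parameters=model.named_parameters())

# Make sure every worker starts from the same weights and optimizer state
hvd.broadcast_parameters(model.state_dict(), root_rank=0)
hvd.broadcast_optimizer_state(optimizer, root_rank=0)

for _ in range(10):                           # dummy training loop
    x = torch.randn(32, 128).cuda()
    loss = model(x).sum()
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()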


#127

#128

#129

# -*- coding: utf-8 -*-
import tensorflow as tf
import numpy as np

# Number of samples
sample_num = 5
# Number of epochs
epoch_num = 2
# Number of samples per batch
batch_size = 3
# Number of batches per epoch
batch_total = int(sample_num / batch_size) + 1

# Generate sample_num images and labels
def generate_data(sample_num=sample_num):
    labels = np.asarray(range(0, sample_num))
    images = np.random.random([sample_num, 224, 224, 3])
    print('image size {}, label size: {}'.format(images.shape, labels.shape))
    return images, labels

def get_batch_data(batch_size=batch_size):
    images, label = generate_data()
    # Cast to TensorFlow dtypes
    images = tf.cast(images, tf.float32)
    label = tf.cast(label, tf.int32)

    # Slice the tensors and feed them into the input queue, in order and without shuffling
    input_queue = tf.train.slice_input_producer([images, label], num_epochs=epoch_num, shuffle=False)

    # Read from the input queue and assemble batches
    image_batch, label_batch = tf.train.batch(input_queue, batch_size=batch_size,
                                              num_threads=2, capacity=64,
                                              allow_smaller_final_batch=False)
    return image_batch, label_batch

image_batch, label_batch = get_batch_data(batch_size=batch_size)

with tf.Session() as sess:
    # Run the initializers first (num_epochs creates a local variable)
    sess.run(tf.global_variables_initializer())
    sess.run(tf.local_variables_initializer())

    # Start a coordinator
    coord = tf.train.Coordinator()
    # Launch the queue-runner threads that fill the queues
    threads = tf.train.start_queue_runners(sess, coord)

    try:
        while not coord.should_stop():
            print('************')
            # Fetch one batch of batch_size samples and labels
            image_batch_v, label_batch_v = sess.run([image_batch, label_batch])
            print(image_batch_v.shape, label_batch_v)
    except tf.errors.OutOfRangeError:
        # Raised once the input queue has delivered all epochs
        print("done! now let's kill all the threads...")
    finally:
        # Ask all threads to stop via the coordinator
        coord.request_stop()
        print('all threads are asked to stop!')
    coord.join(threads)  # Wait for the started threads to rejoin the main thread
    print('all threads are stopped!')

#130

https://blog.csdn.net/u012436149/article/details/53837651


#131

#132

TF gfile
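
A few typical tf.gfile calls in TF 1.x style (in TF 2.x the same functions live under tf.io.gfile); the paths below are made up for illustration.

import tensorflow as tf

path = "/tmp/example.txt"          # hypothetical path, just for illustration

# Write and read a file through the gfile abstraction (works for local paths, HDFS, GCS, ...)
with tf.gfile.GFile(path, "w") as f:
    f.write("hello gfile\n")

if tf.gfile.Exists(path):
    with tf.gfile.GFile(path, "r") as f:
        print(f.read())

print(tf.gfile.ListDirectory("/tmp"))   # list entries in a directory
tf.gfile.MakeDirs("/tmp/gfile_demo")    # mkdir -p equivalent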


#133

#134

https://blog.csdn.net/hu_guan_jie/article/details/78495297


#135

https://blog.csdn.net/guolindonggld/article/details/79255061


#136

#137

tf.expand_dims and tf.squeeze functions

tf.where https://blog.csdn.net/a_a_ron/article/details/79048446
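
Small shape examples for the three ops noted above, written in TF 1.x session style; the values are made up.

import tensorflow as tf
import numpy as np

t = tf.constant(np.arange(6).reshape(2, 3), dtype=tf.float32)   # shape (2, 3)

expanded = tf.expand_dims(t, axis=0)      # shape (1, 2, 3): insert a new size-1 axis
squeezed = tf.squeeze(expanded, axis=0)   # shape (2, 3): remove the size-1 axis

cond = t > 2.0
# Element-wise select: take from t where cond is True, otherwise from zeros
selected = tf.where(cond, t, tf.zeros_like(t))

with tf.Session() as sess:
    print(sess.run(tf.shape(expanded)))   # [1 2 3]
    print(sess.run(tf.shape(squeezed)))   # [2 3]
    print(sess.run(selected))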


#138

https://yq.aliyun.com/articles/67011?spm=a2c4e.11153940.blogcont31765.18.30781c4cZZNV6a


#139

5W2H analysis / PEST analysis / 4P marketing theory / RFM model


#140

Google Analytics measures user loyalty with four metrics: Repeated Times, Recency, Length of Visit, and Depth of Visit, i.e. visit frequency, time of the most recent visit, average time on site, and average pages per visit. All four can be computed directly from a site's clickstream data, so they apply to any website. Their definitions and how to compute them are listed below, followed by a small pandas sketch.

·Visit frequency: the number of times a user visits the site within a given period, i.e. the count of Visits per user;

·Recency: the time of the user's most recent visit; since this is a point in time, it is usually measured as the number of days between the most recent visit and now;

·Average time on site: the average duration of a user's visits within the period, i.e. the sum of Time on Site divided by the number of Visits per user;

·Average pages per visit: the average number of pages viewed per visit within the period, i.e. the sum of Page Views divided by the number of Visits per user.
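
A minimal pandas sketch of how these four metrics could be computed from a visit-level clickstream table; the column names (user_id, visit_start, time_on_site, page_views) and the sample values are made up for illustration.

import pandas as pd

# Hypothetical visit-level clickstream table: one row per visit
visits = pd.DataFrame({
    "user_id":      ["u1", "u1", "u2", "u2", "u2"],
    "visit_start":  pd.to_datetime(["2019-01-01", "2019-01-10",
                                    "2019-01-03", "2019-01-05", "2019-01-12"]),
    "time_on_site": [120, 300, 60, 90, 45],   # seconds per visit
    "page_views":   [5, 12, 2, 3, 1],
})

now = pd.Timestamp("2019-01-15")

loyalty = visits.groupby("user_id").agg(
    visit_count=("visit_start", "count"),          # visit frequency
    last_visit=("visit_start", "max"),
    avg_time_on_site=("time_on_site", "mean"),     # sum of Time on Site / number of Visits
    avg_page_views=("page_views", "mean"),         # sum of Page Views / number of Visits
)
loyalty["recency_days"] = (now - loyalty["last_visit"]).dt.days   # days since last visit

print(loyalty[["visit_count", "recency_days", "avg_time_on_site", "avg_page_views"]])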