■ Softmax Implementation with File Input
▶ TensorFlow APIs Used
tf.nn.softmax_cross_entropy_with_logits
tf.argmax
cost_i = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=Y_one_hot)
- Obtains the cost directly from the raw scores logits = tf.matmul(X, W) + b, without applying softmax first.
- hypothesis = tf.nn.softmax(logits) is not used for the cost; it is used only when making predictions. (A quick sanity check of this equivalence follows below.)
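The fused op should give the same value as applying softmax and then computing cross entropy by hand. A minimal sketch (the logits and labels below are made-up illustration values, not from the post):

import tensorflow as tf

logits_ex = tf.constant([[2.0, 1.0, 0.1]])   # made-up raw scores, no softmax applied
labels_ex = tf.constant([[1.0, 0.0, 0.0]])   # made-up one-hot true label

# Fused op: applies softmax to the logits internally
fused = tf.nn.softmax_cross_entropy_with_logits(logits=logits_ex, labels=labels_ex)

# Manual version: softmax first, then cross entropy
probs = tf.nn.softmax(logits_ex)
manual = -tf.reduce_sum(labels_ex * tf.log(probs), axis=1)

with tf.Session() as sess:
    print(sess.run([fused, manual]))  # both approximately [0.417]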
cost = tf.reduce_mean(cost_i)
- Averages the per-example cross-entropy losses into a single scalar cost.
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(cost)
- Minimizes the cost with gradient descent (learning rate 0.1) to train W and b.
prediction = tf.argmax(hypothesis, 1)
- Produces the predicted class. The second argument, 1, is the axis: tf.argmax returns the index of the largest value along that axis, so axis 1 picks the most probable class in each row. (The argmax function is worth studying; see the sketch below.)
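A small sketch of how tf.argmax behaves along different axes (values made up for illustration):

import tensorflow as tf

probs = tf.constant([[0.1, 0.7, 0.2],
                     [0.8, 0.1, 0.1]])

with tf.Session() as sess:
    print(sess.run(tf.argmax(probs, 1)))  # [1 0]: index of the max in each row
    print(sess.run(tf.argmax(probs, 0)))  # [1 0 0]: index of the max in each column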
correct_prediction = tf.equal(prediction,tf.argmax(Y_one_hot,1))
- Compares each prediction against the true class index recovered from the one-hot label.
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
- Computes the accuracy: the booleans from tf.equal are cast to floats (True → 1.0, False → 0.0) and averaged.
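To make the cast concrete, a minimal made-up sketch: tf.equal produces booleans, tf.cast maps them to 1.0/0.0, and tf.reduce_mean averages them into the fraction of correct predictions:

import tensorflow as tf

pred  = tf.constant([0, 1, 2, 2], dtype=tf.int64)   # made-up predictions
truth = tf.constant([0, 1, 1, 2], dtype=tf.int64)   # made-up true labels

correct = tf.equal(pred, truth)                      # [True, True, False, True]
acc = tf.reduce_mean(tf.cast(correct, tf.float32))   # (1 + 1 + 0 + 1) / 4 = 0.75

with tf.Session() as sess:
    print(sess.run(acc))  # 0.75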
# Lab06-2 Softmax_zoo_classifier
import tensorflow as tf
import numpy as np

tf.set_random_seed(777)  # for reproducibility

# Predicting animal type based on various features
xy = np.loadtxt('data-04-zoo.csv', delimiter=',', dtype=np.float32)
x_data = xy[:, 0:-1]
y_data = xy[:, [-1]]

print(x_data.shape, y_data.shape)

nb_classes = 7  # 0 ~ 6

X = tf.placeholder(tf.float32, [None, 16])
Y = tf.placeholder(tf.int32, [None, 1])  # 0 ~ 6

Y_one_hot = tf.one_hot(Y, nb_classes)  # one hot
print("one_hot", Y_one_hot)
Y_one_hot = tf.reshape(Y_one_hot, [-1, nb_classes])
print("reshape", Y_one_hot)

W = tf.Variable(tf.random_normal([16, nb_classes]), name='weight')
b = tf.Variable(tf.random_normal([nb_classes]), name='bias')

# tf.nn.softmax computes softmax activations
# softmax = exp(logits) / reduce_sum(exp(logits), dim)
logits = tf.matmul(X, W) + b
hypothesis = tf.nn.softmax(logits)

# Cross entropy cost/loss
cost_i = tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=Y_one_hot)
cost = tf.reduce_mean(cost_i)
optimizer = tf.train.GradientDescentOptimizer(learning_rate=0.1).minimize(cost)

prediction = tf.argmax(hypothesis, 1)
correct_prediction = tf.equal(prediction, tf.argmax(Y_one_hot, 1))
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))

# Launch graph
with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())

    for step in range(2000):
        sess.run(optimizer, feed_dict={X: x_data, Y: y_data})
        if step % 100 == 0:
            loss, acc = sess.run([cost, accuracy], feed_dict={X: x_data, Y: y_data})
            print("Step: {:5}\tLoss: {:.3f}\tAcc: {:.2%}".format(step, loss, acc))

    # Let's see if we can predict
    pred = sess.run(prediction, feed_dict={X: x_data})
    # y_data: (N,1) = flatten => (N, ) matches pred.shape
    for p, y in zip(pred, y_data.flatten()):
        print("[{}] Prediction: {} True Y: {}".format(p == int(y), p, int(y)))
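One detail in the script worth noting: tf.one_hot adds a new axis, so the (N, 1) label tensor becomes (N, 1, nb_classes), and the following tf.reshape flattens it back to (N, nb_classes). A minimal sketch of that shape change (labels made up for illustration):

import tensorflow as tf

Y_ex = tf.constant([[0], [3]])        # shape (2, 1): class indices as a column
one_hot = tf.one_hot(Y_ex, 7)         # shape (2, 1, 7): one_hot adds an axis
flat = tf.reshape(one_hot, [-1, 7])   # shape (2, 7): one row per example

print(one_hot.shape, flat.shape)      # (2, 1, 7) (2, 7)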