■ A binary classifier using the sigmoid function
hypothesis = tf.sigmoid(tf.matmul(X,W)+b)
- The hypothesis applies the sigmoid function to the linear term tf.matmul(X,W)+b, mapping it into the range (0, 1).
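For intuition, here is a minimal NumPy sketch of the same hypothesis outside the TensorFlow graph (the x, W, and b values are made up purely for illustration):

```python
import numpy as np

def sigmoid(z):
    # squashes any real number into (0, 1)
    return 1.0 / (1.0 + np.exp(-z))

# hypothetical values, just to show the shapes involved
x = np.array([[1.0, 2.0]])      # one sample with 2 features
W = np.array([[0.5], [-0.3]])   # weights, shape (2, 1)
b = np.array([0.1])             # bias

h = sigmoid(x @ W + b)          # same as tf.sigmoid(tf.matmul(X, W) + b)
print(h)                        # a probability-like value in (0, 1)
```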
cost = -tf.reduce_mean(Y*tf.log(hypothesis)+(1-Y)*tf.log(1-hypothesis))
- Cross entropy is used as the cost function.
- When the label is 1, only the Y*tf.log(hypothesis) term contributes; training drives this term toward 0 by pushing hypothesis toward 1.
- When the label is 0, only the (1-Y)*tf.log(1-hypothesis) term contributes; training drives this term toward 0 by pushing hypothesis toward 0. A small worked example follows this list.
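A quick NumPy check of how the two terms behave (the prediction values here are made up):

```python
import numpy as np

def cross_entropy(y, h):
    # same formula as the cost above, for a single prediction
    return -(y * np.log(h) + (1 - y) * np.log(1 - h))

# label 1: loss shrinks toward 0 as the prediction approaches 1
print(cross_entropy(1, 0.9))   # ~0.105
print(cross_entropy(1, 0.99))  # ~0.010

# label 0: loss shrinks toward 0 as the prediction approaches 0
print(cross_entropy(0, 0.1))   # ~0.105
```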
train = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(cost)
- Training minimizes the cost by gradient descent with a learning_rate of 0.01.
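Under the hood, minimize(cost) computes the gradient of the cost with respect to W and b and steps against it. For sigmoid cross-entropy the gradient has a well-known closed form; a hedged NumPy sketch of a single update step (the function and variable names are mine, not TensorFlow's):

```python
import numpy as np

def gradient_step(X, y, W, b, lr=0.01):
    # gradient of the sigmoid cross-entropy cost:
    #   dW = X^T (h - y) / m,   db = mean(h - y)
    h = 1.0 / (1.0 + np.exp(-(X @ W + b)))
    m = X.shape[0]
    dW = X.T @ (h - y) / m
    db = np.mean(h - y)
    return W - lr * dW, b - lr * db
```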
predicted = tf.cast(hypothesis > 0.5 , dtype=tf.float32)
- Sets predicted to 1 when hypothesis exceeds 0.5 and to 0 otherwise (see the NumPy sketch after the accuracy step).
- tf.cast converts the boolean comparison result to tf.float32.
accuracy = tf.reduce_mean(tf.cast(tf.equal(predicted,Y),dtype=tf.float32))
- tf.equal(predicted, Y) yields 1 where they match and 0 where they differ; averaging those values gives the accuracy.
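The predicted and accuracy lines in plain NumPy, using made-up hypothesis values:

```python
import numpy as np

h = np.array([[0.2], [0.4], [0.3], [0.7], [0.9], [0.6]])  # hypothetical model outputs
y = np.array([[0.0], [0.0], [0.0], [1.0], [1.0], [1.0]])

predicted = (h > 0.5).astype(np.float32)                 # threshold at 0.5
accuracy = np.mean((predicted == y).astype(np.float32))  # fraction of matches
print(predicted.ravel())  # [0. 0. 0. 1. 1. 1.]
print(accuracy)           # 1.0
```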
```python
# Lab05-1 Logistic Regression Classifier
import tensorflow as tf

tf.set_random_seed(777)  # for reproducibility

x_data = [[1, 2], [2, 3], [3, 1], [4, 3], [5, 3], [6, 2]]
y_data = [[0], [0], [0], [1], [1], [1]]

# placeholders for a tensor that will be always fed.
X = tf.placeholder(tf.float32, shape=[None, 2])
Y = tf.placeholder(tf.float32, shape=[None, 1])

W = tf.Variable(tf.random_normal([2, 1]), name='weight')
b = tf.Variable(tf.random_normal([1]), name='bias')

# Hypothesis using sigmoid: tf.div(1., 1. + tf.exp(-(tf.matmul(X, W) + b)))
hypothesis = tf.sigmoid(tf.matmul(X, W) + b)

# cost/loss function
cost = -tf.reduce_mean(Y * tf.log(hypothesis) + (1 - Y) * tf.log(1 - hypothesis))

train = tf.train.GradientDescentOptimizer(learning_rate=0.01).minimize(cost)

# Accuracy computation
# True if hypothesis > 0.5 else False
predicted = tf.cast(hypothesis > 0.5, dtype=tf.float32)
accuracy = tf.reduce_mean(tf.cast(tf.equal(predicted, Y), dtype=tf.float32))

# Launch graph
with tf.Session() as sess:
    # Initialize TensorFlow variables
    sess.run(tf.global_variables_initializer())

    for step in range(10001):
        cost_val, _ = sess.run([cost, train], feed_dict={X: x_data, Y: y_data})
        if step % 200 == 0:
            print(step, cost_val)

    # Accuracy report
    h, c, a = sess.run([hypothesis, predicted, accuracy],
                       feed_dict={X: x_data, Y: y_data})
    print("\nHypothesis: ", h, "\nCorrect (Y): ", c, "\nAccuracy: ", a)
```
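Note that this is TensorFlow 1.x graph-style code. If you only have TensorFlow 2.x installed, it should still run through the compatibility layer, roughly like this:

```python
# swap the import at the top of the script for the TF1 compat module
import tensorflow.compat.v1 as tf

tf.disable_eager_execution()  # restore TF1 graph/session semantics
# ...the rest of the script (tf.placeholder, tf.Session, etc.) is unchanged
```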