
04-1. Multi-variable linear regression

■ Linear regression with multiple inputs



x1_data, x2_data, x3_data, y_data


- The multiple inputs are each handled as a separate 1×5 array (one list of five values per feature).


hypothesis = x1 * w1 + x2 * w2 + x3 * w3 + b

 

- The hypothesis is applied over the multiple inputs, with a separate weight for each (an equivalent matrix-form sketch follows below).
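
For comparison (this variant is not in the original post), the same hypothesis can be written in matrix form with a single weight tensor, assuming the five samples are stacked into one [5, 3] input matrix:

import tensorflow as tf

# Matrix-form sketch of the same hypothesis (assumed variant, not the original code).
# The three input features are stacked column-wise into one [5, 3] matrix.
x_data = [[73., 80., 75.],
          [93., 88., 93.],
          [89., 91., 90.],
          [96., 98., 100.],
          [73., 66., 70.]]
y_data = [[152.], [185.], [180.], [196.], [142.]]

X = tf.placeholder(tf.float32, shape=[None, 3])
Y = tf.placeholder(tf.float32, shape=[None, 1])

W = tf.Variable(tf.random_normal([3, 1]), name='weight')
b = tf.Variable(tf.random_normal([1]), name='bias')

# tf.matmul(X, W) + b computes the same value as x1*w1 + x2*w2 + x3*w3 + b
hypothesis = tf.matmul(X, W) + b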


cost_val, hy_val, _ = sess.run([cost, hypothesis, train], feed_dict={x1: x1_data, x2: x2_data, x3: x3_data, Y: y_data})

 

- The multiple inputs are passed in through feed_dict (a minimal placeholder/feed_dict sketch follows below).
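
As a minimal sketch of the placeholder/feed_dict mechanism (the names and values here are only illustrative, not from the original post):

import tensorflow as tf

# A placeholder has no value of its own; it is filled in at sess.run() time.
x_in = tf.placeholder(tf.float32)
y_in = tf.placeholder(tf.float32)
summed = x_in + y_in

sess = tf.Session()
# feed_dict binds concrete values to the placeholders for this one run.
print(sess.run(summed, feed_dict={x_in: [1., 2.], y_in: [3., 4.]}))  # [4. 6.]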





# Lab04-1 Multi-variable linear regression
import tensorflow as tf

tf.set_random_seed(777)  # for reproducibility

# Three input features, five samples each
x1_data = [73., 93., 89., 96., 73.]
x2_data = [80., 88., 91., 98., 66.]
x3_data = [75., 93., 90., 100., 70.]

y_data = [152., 185., 180., 196., 142.]

# placeholders for a tensor that will be always fed.
x1 = tf.placeholder(tf.float32)
x2 = tf.placeholder(tf.float32)
x3 = tf.placeholder(tf.float32)

Y = tf.placeholder(tf.float32)

w1 = tf.Variable(tf.random_normal([1]), name='weight1')
w2 = tf.Variable(tf.random_normal([1]), name='weight2')
w3 = tf.Variable(tf.random_normal([1]), name='weight3')
b = tf.Variable(tf.random_normal([1]), name='bias')

hypothesis = x1 * w1 + x2 * w2 + x3 * w3 + b
print(hypothesis)

# cost/loss function
cost = tf.reduce_mean(tf.square(hypothesis - Y))

# Minimize. Need a very small learning rate for this data set
optimizer = tf.train.GradientDescentOptimizer(learning_rate=1e-5)
train = optimizer.minimize(cost)

# Launch the graph in a session.
sess = tf.Session()
# Initializes global variables in the graph.
sess.run(tf.global_variables_initializer())

for step in range(2001):
    cost_val, hy_val, _ = sess.run([cost, hypothesis, train],
                                   feed_dict={x1: x1_data, x2: x2_data, x3: x3_data, Y: y_data})
    if step % 10 == 0:
        print(step, "Cost: ", cost_val, "\nPrediction:\n", hy_val)
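
After training, the same session can also be used for prediction by feeding new values into the placeholders; the scores below are hypothetical and not from the original post:

# Hypothetical prediction: run only the hypothesis node with new inputs.
print("Prediction for new scores:",
      sess.run(hypothesis, feed_dict={x1: [100.], x2: [70.], x3: [101.]}))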

