If I understand a neural net correctly, it is just a graph of nodes and edges where each node in a given layer is connected to every node in the following layer.
The nodes have biases and the edges have weights? And you do some multiplication of these values to get a prediction.
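If that's right, then I assume the "multiplication" for a single layer is something like a matrix product plus a bias, e.g. in NumPy (the input values and bias here are made up for illustration):

import numpy as np

x = np.array([0.5, 1.0])        # activations of input nodes a and b
W = np.array([[0.01], [0.03]])  # edge weights a->c and b->c (see table below)
bias = np.array([0.1])          # bias on node c

c = x @ W + bias                # 0.5*0.01 + 1.0*0.03 + 0.1
print(c)                        # [0.135]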

Given a two-layer model (with two input nodes, 'a' and 'b', and one output node, 'c'), this is what I am after:
| source | destination | value |
+--------+-------------+-------+
| a      | c           | 0.01  |
| b      | c           | 0.03  |
But when I call model.weights (albeit on a more complex model), I get a bunch of keyless arrays with no way to tell which values belong to which nodes.
[<tf.Variable 'dense_1/kernel:0' shape=(8, 12) dtype=float32, numpy=
array([[ 0.31751466, 0.20620143, 0.09791961, -0.08813753, 0.2515421 ,
-0.53187364, -0.15702713, 0.0267031 , -0.48389524, -0.13240823,
0.39453653, -0.39209265],
[ 0.31308496, -0.38468117, -0.03970708, 0.2889997 , 0.03803336,
0.04796927, -0.5140167 , 0.04645742, 0.08511442, -0.09435426,
0.03105392, -0.17520434],
[ 0.05365064, -0.05402106, -0.02931813, 0.13150737, 0.08898667,
0.20198704, 0.28716817, 0.21081768, -0.09572094, 0.14665389,
-0.3083644 , -0.47491354],
[-0.36734372, -0.12509695, -0.16984704, -0.19592582, 0.24023046,
-0.28856498, 0.11084742, 0.12101128, 0.00146453, -0.4996385 ,
-0.23521361, 0.24130017],
[ 0.21538568, -0.08531788, -0.32247233, -0.09213281, -0.39390212,
0.05042276, 0.22282743, -0.11438937, -0.00920196, 0.12748554,
-0.02741051, -0.12594655],
[ 0.3057384 , -0.20449257, 0.16837521, 0.21493798, -0.14034544,
0.45435148, -0.0548106 , 0.07033874, 0.39275315, -0.3332669 ,
-0.10222256, 0.14674312],
[ 0.36575058, 0.07205153, -0.14340317, -0.57348907, 0.7167731 ,
-0.29590985, 0.6351 , -0.6615748 , -0.23423046, -0.1065482 ,
0.7084621 , 0.02146828],
[-0.14760445, -0.4926324 , 0.30986223, 0.4067813 , 0.32313958,
-0.39595246, 0.12813015, -0.3088377 , -0.7285755 , 0.6085407 ,
0.39351743, -0.09248918]], dtype=float32)>,
<tf.Variable 'dense_1/bias:0' shape=(12,) dtype=float32, numpy=
array([-1.1890789 , 0. , -0.43765482, 0.5292001 , -0.94201744,
0.44064137, -0.5898111 , 0.8738893 , -0.62948394, 0.9394948 ,
0.47176355, 0. ], dtype=float32)>,
<tf.Variable 'dense_2/kernel:0' shape=(12, 8) dtype=float32, numpy=
array([[ 0.18743241, -0.04509293, 0.26035592, -0.40080604, -0.2120734 ,
0.0604641 , 0.17452721, -0.25245216],
[-0.4116977 , 0.4476785 , 0.13495606, 0.38070595, -0.16811815,
-0.5323667 , -0.41471216, 0.49056184],
[-0.43843648, -0.01767761, 0.03876654, 0.279591 , -0.64866304,
0.4605058 , 0.50288963, 0.46865177],
[-0.50431 , 0.26749972, -0.4822985 , 0.11643535, 0.34190154,
0.28961414, -0.19484225, 0.32788265],
[-0.4659909 , 0.12863334, -0.17177017, 0.27696657, -0.08261362,
0.1787579 , -0.49217325, -0.419283 ],
[-0.31586087, 0.4421215 , -0.35133213, -0.40784043, 0.3213457 ,
0.08262701, -0.20723267, -0.4305911 ],
[-0.32226318, -0.3479017 , -0.48984393, -0.19052912, 0.27398133,
-0.18631694, -0.42036086, -0.31824118],
[-0.04223084, -0.38938865, -0.33997327, -0.7986885 , -0.12062006,
-0.37880445, 0.06364141, 0.41674942],
[-0.07699671, -1.0260301 , -0.38287994, 0.46872973, -0.32630473,
0.37103057, 0.06274027, -0.25317484],
[-0.11334842, 0.29602957, 0.01759415, 0.07748368, -0.0767558 ,
0.13787462, -0.31502756, 0.17331126],
[-0.5030543 , -0.23578712, -0.38978124, 0.01187875, -0.02882512,
-0.5208091 , -0.4208508 , -0.08294159],
[ 0.04435921, 0.545004 , 0.07590699, 0.21470094, -0.46099266,
-0.25307545, -0.31362575, 0.3284188 ]], dtype=float32)>,
<tf.Variable 'dense_2/bias:0' shape=(8,) dtype=float32, numpy=
array([ 0. , 1.3254918 , -0.18484406, -0.0136466 , 1.2459729 ,
-1.331188 , -0.01439124, 0.9184486 ], dtype=float32)>,
<tf.Variable 'dense_3/kernel:0' shape=(8, 1) dtype=float32, numpy=
array([[-0.27390796],
[-0.40990734],
[-0.12878264],
[-0.43434066],
[-0.04099607],
[ 0.57922167],
[ 0.3830525 ],
[-0.47695825]], dtype=float32)>, <tf.Variable 'dense_3/bias:0' shape=(1,) dtype=float32, numpy=array([-1.3391492], dtype=float32)>]
Is there a JSON/dictionary-like way to get what I am after?
The "sources" and "destinations" of those edges don't have names like "a" and "b", they're just the kth neuron of the nth layer. The weights, then, are just an array. For example, weights[n][i][j] might be the weight of the edge connecting the ith neuron of layer n to the jth neuron of layer n+1. In this paradigm, the weights of your textbook example would look like
[[[ 0.8  0.4  0.3 ]
  [ 0.2  0.9  0.5 ]]
 [[ 0.3 ]
  [ 0.5 ]
  [ 0.9 ]]]
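To make that indexing concrete, here is the same structure as plain Python nested lists (just a sketch; a real framework stores these as separate arrays, as discussed next):

# weights[n][i][j]: edge from neuron i of layer n to neuron j of layer n+1
weights = [
    [[0.8, 0.4, 0.3],   # edges leaving neuron 0 of layer 0
     [0.2, 0.9, 0.5]],  # edges leaving neuron 1 of layer 0
    [[0.3],             # edges leaving neuron 0 of layer 1
     [0.5],             # edges leaving neuron 1 of layer 1
     [0.9]],            # edges leaving neuron 2 of layer 1
]

print(weights[0][1][2])  # 0.5: neuron 1 of layer 0 -> neuron 2 of layer 1
print(weights[1][2][0])  # 0.9: neuron 2 of layer 1 -> neuron 0 of layer 2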
Once you also take into account that each neuron can have a bias as well as incoming weights, and that layers with different numbers of neurons would make that 3D array ragged (which is inconvenient), the most convenient representation turns out to be a structure containing several 2D arrays (each holding the weights between one pair of adjacent layers) and several 1D arrays (each holding the biases of one layer), all of different sizes. That is exactly what the dump you posted shows: dense_1/kernel is the (8, 12) weight matrix connecting your 8 inputs to the 12 neurons of the next layer, dense_1/bias holds that layer's 12 biases, and so on.
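As far as I know there is no built-in edge-table export, but you can build the JSON-like structure you describe yourself. A minimal sketch, assuming a Sequential model of Dense layers and inventing "layerN_unitK" names for the anonymous neurons (the stand-in model below just mirrors the shapes in your dump):

import json
import tensorflow as tf

# Stand-in model with the same shapes as your dump (8 -> 12 -> 8 -> 1);
# substitute your own model here.
model = tf.keras.Sequential([
    tf.keras.Input(shape=(8,)),
    tf.keras.layers.Dense(12, activation="relu"),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(1),
])

edges = []
for n, layer in enumerate(model.layers):
    kernel, bias = layer.get_weights()  # kernel shape: (units_in, units_out)
    for i in range(kernel.shape[0]):
        for j in range(kernel.shape[1]):
            edges.append({
                "source": f"layer{n}_unit{i}",
                "destination": f"layer{n + 1}_unit{j}",
                "value": float(kernel[i, j]),
            })

print(json.dumps(edges[:2], indent=2))  # first two rows of the table

The biases don't fit the source/destination scheme, so you would store them separately, keyed by node name.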