Aside from having a name that sounds like it came straight out of Neuromancer, why is everyone so excited about Neural Networks?
Imagine trying to map a complex pattern to some outcome. Maybe you're trying to recognize whether an image is a dog, or a blueberry muffin.
Have you taken a course in dog-ology? Probably not.
You've seen dogs, you've seen muffins, and somehow your brain has found an unknown way to tell the difference. What about letters/numbers?
That's what neural networks are trying to imitate - that process of taking one set of sensory inputs, and through repetition, finding patterns that map to some internal set of labels. But, in order to understand neural networks, we should do a little foundational history on them.
Neural networks (in their most basic form) are actually one of the oldest (if not THE oldest) machine learning tool still in use today.
Invented by McCulloch and Pitts in 1943*, the perceptron (the most fundamental element of a neural network) conceptually predates the first electronic computers.
* McCulloch, W. S. and Pitts, W. 1943. A logical calculus of the ideas immanent in nervous activity. Bulletin of Mathematical Biophysics 5:115–133.
You might ask yourself: why would anyone bother designing an algorithm that takes hours of compute time on a high-end machine TODAY when, at the time, they lacked even a rudimentary computer? The intent was to build a model of a single neuron.
To understand the intuition for how a perceptron works mathematically, it's useful to have a basic understanding of the biological mechanics of a neuron.
* Source: Wikipedia. No, they didn't endorse my usage of this diagram.
* Moar JPEG
Ok, so how do we model this process using only the mathematical tools we have available?
Rules
output $= y \in \{0, 1\}$
inputs $= x_n \in \{0, 1\}$, and $|X| = N$
threshold value $= \Theta$
$\sigma(X) = 1$ if $\sum_{k=1}^{N} x_k > \Theta$, else $\sigma(X) = 0$
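To make these rules concrete, here is a minimal sketch of such a threshold unit in R (the function name mcp_neuron and the example inputs are my own, purely for illustration):

```r
# A McCulloch-Pitts-style unit: binary inputs, a fixed threshold, binary output
mcp_neuron <- function(x, theta) {
  # x: a vector of 0/1 inputs; theta: the threshold value
  as.integer(sum(x) > theta)
}

# With theta = 2, this unit only "fires" when all three inputs are on
mcp_neuron(c(1, 1, 1), theta = 2)  # returns 1
mcp_neuron(c(1, 0, 1), theta = 2)  # returns 0
```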
Put another way:
We're going to:
* This was a slight modification by Frank Rosenblatt, who built the first computer-based perceptron.
Two things to notice:
1.) We now have an activation function that replaces $\Theta$.
2.) $t$? Why is that there? That's a built-in assumption that learning takes time. It's also critical to how we go about solving this thing.
You might have wondered: how in the world do we solve this thing?
The answer: we guess. We start with some random set of weights.
Then we update, using information from our accuracy to infer how well our weights are informing those guesses. Eventually, we'll get something meaningful.
This is why the change from the original step-wise function is so important:
The linear activation function buys us something that the simple step-wise function could not: differentiability.
Lucky for us, we can define $\frac{1}{2}\,\mathrm{SSE}$ as our loss function, and then $\frac{\partial L}{\partial w^t_i}$ is fairly easy.
For each training sample... in each iteration:
1.) $z = w^t_0 + w^t_1 x_1 + \dots + w^t_n x_n = (W^t)^T x$
($w^t_0$ is a scalar (the bias). You will sometimes see this written as $b^t$.)
2.) $\phi(z^{(i)}) = \hat{y}^{(i)}$
3.) $\eta =$ learning rate
Loss function: $J(w^t_i) = \frac{1}{2}\sum_i \left(y^{(i)} - \phi(z)^{(i)}\right)^2$
$\frac{\partial J}{\partial w^t_j} = -\sum_i \left(y^{(i)} - \phi(z)^{(i)}\right) x^{(i)}_j, \qquad W^{t+1} = W^t - \eta \nabla J(W^t)$
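Here's a rough sketch of that update loop in R, using a linear activation ($\phi(z) = z$, Adaline-style) and batch gradient descent. The toy data, the learning rate, and the number of epochs are all made up for illustration:

```r
set.seed(1)

# Toy data: two inputs and a (roughly) linearly separable 0/1 label
n <- 100
X <- cbind(runif(n), runif(n))
y <- as.integer(X[, 1] + X[, 2] > 1)

eta <- 0.01                # learning rate
w   <- rep(0, 3)           # w[1] plays the role of w_0 (the bias)

for (epoch in 1:50) {
  z     <- w[1] + X %*% w[2:3]        # z = w_0 + w_1*x_1 + w_2*x_2
  phi_z <- as.vector(z)               # linear activation: phi(z) = z
  err   <- y - phi_z                  # y - phi(z)

  # W^{t+1} = W^t - eta * gradient, where dJ/dw_j = -sum(err * x_j)
  w[1]   <- w[1]   + eta * sum(err)
  w[2:3] <- w[2:3] + eta * as.vector(t(X) %*% err)
}

# Threshold the linear output at 0.5 to get a 0/1 guess
mean(as.integer(w[1] + X %*% w[2:3] > 0.5) == y)   # training accuracy
```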
Let's see this in action:
You'll notice that the linear 'activation function' produces a linear decision boundary.
So what's the big deal? We could do that, and more, with SVMs.
Unlike with SVMs, perceptrons improve over time. Because a perceptron can hypothetically continue to improve so long as it keeps seeing any loss, the only upper limit to a perceptron's performance is how flexible the activation function is.
But there's more than that: because we're using derivatives to update the weights, we can link a bunch of neurons together and use the chain rule to optimize the model weights W as they work together to find the pattern that maps X→Y.
Well what do you get when you glue a bunch of neurons together? A brain!*
By breaking down a complex mapping task into a series of steps, we can use large collections of modified perceptrons to universally approximate any functional form.
* with a liiitle more calculus
Source: Imperial College
Layer seems to imply we could have multiple sets of perceptrons?
All we do is make $y_0$, $y_1$, $y_2$ feed into a new set of perceptrons, and have those new perceptrons find the patterns in the output of our old ones.
That makes the output a little more complex, because now $\hat{y}$ is equal to $\sigma_2(\sigma_1(X))$,
where $\sigma_i()$ is the perceptron weighting and activation function for layer $i$.
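As a quick illustration of $\hat{y} = \sigma_2(\sigma_1(X))$, here's a hand-rolled forward pass in R. The weight matrices, the logistic activation, and all the names are assumptions made up for this sketch:

```r
set.seed(1)
sigmoid <- function(z) 1 / (1 + exp(-z))

# Layer 1: 3 inputs -> 3 hidden outputs (y0, y1, y2); Layer 2: 3 -> 1 output
W1 <- matrix(rnorm(3 * 3), nrow = 3, ncol = 3)   # weights for sigma_1
W2 <- matrix(rnorm(3 * 1), nrow = 3, ncol = 1)   # weights for sigma_2

x <- c(0.2, 0.7, 0.1)                            # one observation

h     <- sigmoid(t(W1) %*% x)    # sigma_1(X): the first layer's outputs
y_hat <- sigmoid(t(W2) %*% h)    # sigma_2(sigma_1(X)): the final prediction
y_hat
```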
How do we update our weights using this new function? The main difference is that we need a way of creating an error for EVERY layer. This requires the chain rule:
$\frac{d}{dx} f(g(x)) = f'(g(x))\, g'(x)$
For the output layer, we can define our error ($\delta^L_j$) by
$\delta^L_j = \frac{\partial C}{\partial a^L_j}\, \sigma'(z^L_j)$
We can compute everything in here: $\frac{\partial C}{\partial a^L_j} = (a^L_j - y_j)$ (for our cost function), we already have $z^L_j$, and plugging it into $\sigma'$ is not difficult. The vector form of this is
$\delta^L = (a^L - y) \circ \sigma'(z^L),$
which we can also compute.
Now all that's left is to find the derivatives for the other layer errors. However, those are going to be functions of the output error. Let's look at some arbitrary layer $l \in \{1, \dots, L\}$, where $l$ is the layer before $l+1$:
$\delta^l = \left((w^{l+1})^T \delta^{l+1}\right) \circ \sigma'(z^l)$
where $w^{l+1} \equiv$ the weight matrix for layer $l+1$.
Now, we can use what we found to calculate $\delta^L$, which can be plugged in to find $\delta^{L-1}$, and so on, until we reach the first layer.
But what we really need is $\frac{\partial C}{\partial w^l_j}$ to update the weights.
In order to update the weights, we really need to know how our costs are changing as a result of our chosen weights. This is actually super simple to do given what we already know:
$\frac{\partial C}{\partial w^l_{jk}} = a^{l-1}_k \delta^l_j$
Or,
$\frac{\partial C}{\partial w^l} = a_{\text{input}}\, \delta_{\text{output}},$
where $a_{\text{input}}$ is the input to the weight for layer $l$ and $\delta_{\text{output}}$ is the error of the output from the weight $w^l$.
Thus, the MSE for the guess $\hat{y}$ of $y$ will trickle through the weights and update them as the algorithm learns. The error from the output trickles back through the layers, propagating changes across all the weights at once.
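Here is a compact sketch of those equations in R for a single hidden layer, one training example, and a squared-error cost. The layer sizes, the sigmoid activation, and every variable name are assumptions made for illustration, not the "official" implementation:

```r
sigmoid       <- function(z) 1 / (1 + exp(-z))
sigmoid_prime <- function(z) sigmoid(z) * (1 - sigmoid(z))

set.seed(1)
x <- c(0.2, 0.7, 0.1); y <- 1                      # one observation and its label
W1 <- matrix(rnorm(4 * 3), 4, 3); b1 <- rnorm(4)   # hidden layer: 3 inputs -> 4 units
W2 <- matrix(rnorm(1 * 4), 1, 4); b2 <- rnorm(1)   # output layer: 4 -> 1
eta <- 0.5

# Forward pass: keep z and a for every layer
z1 <- W1 %*% x + b1;  a1 <- sigmoid(z1)
z2 <- W2 %*% a1 + b2; a2 <- sigmoid(z2)

# Output-layer error: delta^L = (a^L - y) o sigma'(z^L)
delta2 <- (a2 - y) * sigmoid_prime(z2)
# Earlier-layer error: delta^l = ((w^{l+1})^T delta^{l+1}) o sigma'(z^l)
delta1 <- (t(W2) %*% delta2) * sigmoid_prime(z1)

# Weight gradients: dC/dw = a_input * delta_output; then one gradient-descent step
W2 <- W2 - eta * delta2 %*% t(a1);  b2 <- b2 - eta * as.vector(delta2)
W1 <- W1 - eta * delta1 %*% t(x);   b1 <- b1 - eta * as.vector(delta1)
```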
Let's see this in action:
* Source: Medium.com
You can think of this process as finding the bottom of a bowl by rolling a ball (with no momentum) down the side of it. The method we used before is called gradient descent, but there are others (we'll see those later.)
As the ball gets close to the bottom of the bowl, it will slow down and eventually stop at the lowest point.
Unlike other machine learning models we've seen so far, Neural Networks learn over what are called epochs.
Definition (Epoch): One full pass of the model over your training data set (updating the weights for each pass).
They do this because they can still learn from data they have already seen before, so long as the network output has a non-zero error and they haven't reached the optimization point for the data they've seen.
Let's watch some neural networks do their thing. Go here and play around with the tensorflow app: playground.
They are extremely flexible - and as we know already, that means they are prone to overfitting. Even more so than the algorithms you've seen so far.
They also have a ton of components to consider. You have an activation function to choose, you have a number of layers, you have the number of nodes IN those layers...
This makes cross-validation much more difficult. They are much more reliant on either guessing and checking (bad) or having experience with them (better).
Further, because they climb to an optimum, there's no guarantee that the solution you find is the best one you could have found for your data. It's highly dependent on how you're updating your weights (optimization method) and where you started.
How we've updated our weights so far has used something called gradient descent.
These two reasons are why the methods often used to minimize a cost function are super bizarre.
Rather than calculate the error across all points in the sample, we only calculate the error for a single point at a time, to speed up the process and avoid overfitting/getting stuck at the same time.
Let's see how some of these weird functions act next to the gradient descent red ball.
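That one-point-at-a-time idea is stochastic gradient descent. Here's a rough sketch of the contrast with the batch update from before, reusing the same kind of toy X, y, and w (all names and sizes are illustrative):

```r
# Batch gradient: one update uses the error from EVERY observation at once
batch_gradient <- function(w, X, y) {
  Xb <- cbind(1, X)                     # prepend a column of 1s for the bias
  -t(Xb) %*% (y - Xb %*% w)
}

# Stochastic version: sweep the data one observation at a time, updating as we go
sgd_epoch <- function(w, X, y, eta = 0.01) {
  for (i in sample(nrow(X))) {          # shuffled order each epoch
    xi  <- c(1, X[i, ])
    err <- y[i] - sum(w * xi)           # error for this single point
    w   <- w + eta * err * xi           # nudge the weights immediately
  }
  w
}
```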
It is perfectly possible to code a neural network by hand, but by far the most common way to write neural networks for production is to use TensorFlow and Keras.
The main downside is that they're both written under the hood in Python, which means it may take some wrangling to get TensorFlow for RStudio working on your computer.
Both of these work together and use a strange representation of data called a 'tensor'. I don't have enough time to explain what a tensor is, but luckily this adorable human does a much better job than I ever could: tensors
TensorFlow lets us build a model sequentially, layer by layer. This is very similar to how tidymodels lets you build your data-processing pipeline.
Like in tidymodels, TensorFlow starts with a model-type object. For our purposes, this is keras_model_sequential().
This tells TensorFlow we're going to build our neural network sequentially.
Just like parsnip, we build an abstraction of a model that will be fed data to train on later.
First, let's remind ourselves what our neural network is trying to do:
Just like with all machine learning, we need to really understand our data to do a good job predicting. Let's take a closer look at one observation in our dataset.
```r
library(keras)

mnist <- dataset_fashion_mnist()
mnist$train$x
```
You don't trust me, fine. We can look at our stuff too.
How about the first 25?
Ok, that's great, but remember we need to set up this problem so we can turn a picture into an outcome (the numbers 0-9). How do we do it?
Well, we can squish the data so that each of these matrices is just a very long row of Xs (28*28 = 784 different variables).
In practice, we can do this with color images as well, with each RGB value acting as a different pixel score.
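For instance, a minimal sketch of that squishing in base R (the object names here are made up; in keras this is what layer_flatten() will do for us):

```r
img  <- matrix(runif(28 * 28), nrow = 28, ncol = 28)   # one fake 28x28 "image"
flat <- as.vector(img)                                 # now just a very long row of Xs
length(flat)                                           # 784
```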
Ok - let's do this in order.

```r
model <- keras_model_sequential()
```

```r
model <- keras_model_sequential() %>%
  layer_flatten(input_shape = c(28, 28))
```

This layer takes our 28x28-pixel image and flattens it into a vector so the model can read it. Think of this like a recipe step.

```r
model <- keras_model_sequential() %>%
  layer_flatten(input_shape = c(28, 28)) %>%
  layer_dense(units = 128, activation = "relu")
```

This is the first layer in our neural network! It creates 128 different 'neurons' (or perceptrons) and sets their activation function to be a ReLU.

```r
model <- keras_model_sequential() %>%
  layer_flatten(input_shape = c(28, 28)) %>%
  layer_dense(units = 128, activation = "relu", name = 'hiddenlayer') %>%
  layer_dense(10, activation = "softmax", name = 'outputlayer')
```

This is our output layer, and the activation it's using is simply: "classify this into one of 10 (the number of nodes) classes, and guess the one with the largest probability."
And that's a neural network! I gave the layers names, because it will be easier to see what they do if I come back later. How do I do that?
summary(model)
You can see how many parameters they have and check: it should be $784 \ (28\text{px} \times 28\text{px}) \times 128 \ (\text{number of hidden neurons}) + 128 \ (w_0\text{, or } b) = 100{,}480$.
That is what's called a LOT of parameters.
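You can check that arithmetic directly (the second line is the output layer's count, which summary(model) will also report):

```r
28 * 28 * 128 + 128   # hidden layer: 100,480 parameters
128 * 10 + 10         # output layer:   1,290 parameters
```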
Now we need to prepare the model. This is done with a compile command. We need to give that command a loss function to use, an optimizer (we'll use Adam, which I described earlier), and any metrics we're interested in (let's look at accuracy).
```r
model %>% compile(
  loss = "sparse_categorical_crossentropy",
  optimizer = "Adam",
  metrics = "accuracy"
)
```
Now, we can use keras/tf to fit our model. First, let's separate our data.
```r
mnist$train$x <- mnist$train$x / 255  # normalizes to [0, 1]
mnist$test$x  <- mnist$test$x / 255   # normalizes to [0, 1]
```
I want you to see what it looks like when it runs, so I'm going to move to a new slide.
```r
model %>% fit(
  x = mnist$train$x,
  y = mnist$train$y,
  epochs = 5,
  validation_split = 0.3
)
```
And that's how you run a neural network! Ours is getting a ~ 92% accuracy rate classifying pictures into one of 10 different categories. If you run this for 40 epochs, you'll get ~ 95% accuracy.
It's super easy.
It's too easy.
Be careful... remember all of those things you've learned about model selection? They matter 10x more with deep learning.
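One sanity check worth adding (a sketch, assuming the mnist$test split we loaded and rescaled above): score the fitted model on data it never trained on.

```r
model %>% evaluate(
  x = mnist$test$x,
  y = mnist$test$y
)
# returns the test-set loss and the metric we asked for at compile time (accuracy)
```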
There are TONS of other cool structures, but they all work exactly like these under the hood - they just add fancy math that allows us to process our inputs in a slightly different way.
CNN: Convolutional Neural Network. You know that flattening step? That kind of sucks. We're actually losing information on the location of the pixels when we do that. If only we could hold onto that information...
By using filters and feeding in data in a special way, we can hold onto the positional information of our numbers.
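To give a flavor, here's a hedged sketch of what that might look like in the same keras style; the filter counts, kernel sizes, and the extra channel dimension in input_shape are illustrative choices, not a recommendation:

```r
cnn <- keras_model_sequential() %>%
  # Convolutions slide small filters across the image, so pixel positions survive
  layer_conv_2d(filters = 32, kernel_size = c(3, 3), activation = "relu",
                input_shape = c(28, 28, 1)) %>%
  layer_max_pooling_2d(pool_size = c(2, 2)) %>%
  # Only flatten AFTER the filters have extracted position-aware features
  layer_flatten() %>%
  layer_dense(units = 128, activation = "relu") %>%
  layer_dense(units = 10, activation = "softmax")
```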
What about timeseries? Wouldn't it be nice to be able to tell a neural network the 'order' the data should occur in? That's handled by a class of estimators called RNNs (or Transformers, more recently).
LSTM/GRU: Long Short-Term Memory / Gated Recurrent Unit. Both of these adapt our neurons so they can 'hold onto' data further into the learning process and change it as they like. The diagram is ugly, but...
where each cell is now:
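For a flavor of the keras side, here's a hedged sketch of a tiny recurrent model; the window length of 60 time steps, the single feature, and the unit count are all made-up numbers, and you'd first have to slice your series into (samples, time steps, features) arrays:

```r
rnn <- keras_model_sequential() %>%
  # layer_lstm expects each observation as a (time steps, features) slice
  layer_lstm(units = 32, input_shape = c(60, 1)) %>%
  layer_dense(units = 1)        # e.g., predict the next value of the series

rnn %>% compile(loss = "mse", optimizer = "Adam")
```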
Date | Time | Global_active_power | Global_reactive_power | Voltage | Global_intensity | Sub_metering_1 | Sub_metering_2 | Sub_metering_3 | datetime |
---|---|---|---|---|---|---|---|---|---|
16/12/2006 | 17:24:00 | 4.22 | 0.418 | 235 | 18.4 | 0 | 1 | 17 | 2006-12-16 17:24:00 |
16/12/2006 | 17:25:00 | 5.36 | 0.436 | 234 | 23 | 0 | 1 | 16 | 2006-12-16 17:25:00 |
16/12/2006 | 17:26:00 | 5.37 | 0.498 | 233 | 23 | 0 | 2 | 17 | 2006-12-16 17:26:00 |
16/12/2006 | 17:27:00 | 5.39 | 0.502 | 234 | 23 | 0 | 1 | 17 | 2006-12-16 17:27:00 |
16/12/2006 | 17:28:00 | 3.67 | 0.528 | 236 | 15.8 | 0 | 1 | 17 | 2006-12-16 17:28:00 |
16/12/2006 | 17:29:00 | 3.52 | 0.522 | 235 | 15 | 0 | 2 | 17 | 2006-12-16 17:29:00 |
16/12/2006 | 17:30:00 | 3.7 | 0.52 | 235 | 15.8 | 0 | 1 | 17 | 2006-12-16 17:30:00 |
16/12/2006 | 17:31:00 | 3.7 | 0.52 | 235 | 15.8 | 0 | 1 | 17 | 2006-12-16 17:31:00 |
16/12/2006 | 17:32:00 | 3.67 | 0.51 | 234 | 15.8 | 0 | 1 | 17 | 2006-12-16 17:32:00 |
16/12/2006 | 17:33:00 | 3.66 | 0.51 | 234 | 15.8 | 0 | 2 | 16 | 2006-12-16 17:33:00 |
16/12/2006 | 17:34:00 | 4.45 | 0.498 | 233 | 19.6 | 0 | 1 | 17 | 2006-12-16 17:34:00 |
16/12/2006 | 17:35:00 | 5.41 | 0.47 | 233 | 23.2 | 0 | 1 | 17 | 2006-12-16 17:35:00 |
16/12/2006 | 17:36:00 | 5.22 | 0.478 | 233 | 22.4 | 0 | 1 | 16 | 2006-12-16 17:36:00 |
16/12/2006 | 17:37:00 | 5.27 | 0.398 | 233 | 22.6 | 0 | 2 | 17 | 2006-12-16 17:37:00 |
16/12/2006 | 17:38:00 | 4.05 | 0.422 | 235 | 17.6 | 0 | 1 | 17 | 2006-12-16 17:38:00 |
16/12/2006 | 17:39:00 | 3.38 | 0.282 | 237 | 14.2 | 0 | 0 | 17 | 2006-12-16 17:39:00 |
16/12/2006 | 17:40:00 | 3.27 | 0.152 | 237 | 13.8 | 0 | 0 | 17 | 2006-12-16 17:40:00 |
16/12/2006 | 17:41:00 | 3.43 | 0.156 | 237 | 14.4 | 0 | 0 | 17 | 2006-12-16 17:41:00 |
16/12/2006 | 17:42:00 | 3.27 | 0 | 237 | 13.8 | 0 | 0 | 18 | 2006-12-16 17:42:00 |
16/12/2006 | 17:43:00 | 3.73 | 0 | 236 | 16.4 | 0 | 0 | 17 | 2006-12-16 17:43:00 |
16/12/2006 | 17:44:00 | 5.89 | 0 | 233 | 25.4 | 0 | 0 | 16 | 2006-12-16 17:44:00 |
16/12/2006 | 17:45:00 | 7.71 | 0 | 231 | 33.2 | 0 | 0 | 17 | 2006-12-16 17:45:00 |
16/12/2006 | 17:46:00 | 7.03 | 0 | 232 | 30.6 | 0 | 0 | 16 | 2006-12-16 17:46:00 |
16/12/2006 | 17:47:00 | 5.17 | 0 | 234 | 22 | 0 | 0 | 17 | 2006-12-16 17:47:00 |
16/12/2006 | 17:48:00 | 4.47 | 0 | 235 | 19.4 | 0 | 0 | 17 | 2006-12-16 17:48:00 |
16/12/2006 | 17:49:00 | 3.25 | 0 | 237 | 13.6 | 0 | 0 | 17 | 2006-12-16 17:49:00 |
16/12/2006 | 17:50:00 | 3.24 | 0 | 236 | 13.6 | 0 | 0 | 17 | 2006-12-16 17:50:00 |
16/12/2006 | 17:51:00 | 3.23 | 0 | 236 | 13.6 | 0 | 0 | 17 | 2006-12-16 17:51:00 |
16/12/2006 | 17:52:00 | 3.26 | 0 | 235 | 13.8 | 0 | 0 | 17 | 2006-12-16 17:52:00 |
16/12/2006 | 17:53:00 | 3.18 | 0 | 235 | 13.4 | 0 | 0 | 17 | 2006-12-16 17:53:00 |
16/12/2006 | 17:54:00 | 2.72 | 0 | 235 | 11.6 | 0 | 0 | 17 | 2006-12-16 17:54:00 |
16/12/2006 | 17:55:00 | 3.76 | 0.076 | 234 | 16.4 | 0 | 0 | 17 | 2006-12-16 17:55:00 |
16/12/2006 | 17:56:00 | 4.34 | 0.09 | 234 | 18.4 | 0 | 0 | 16 | 2006-12-16 17:56:00 |
16/12/2006 | 17:57:00 | 4.51 | 0 | 234 | 19.2 | 0 | 0 | 17 | 2006-12-16 17:57:00 |
16/12/2006 | 17:58:00 | 4.06 | 0.2 | 235 | 17.6 | 0 | 0 | 17 | 2006-12-16 17:58:00 |
16/12/2006 | 17:59:00 | 2.47 | 0.058 | 237 | 10.4 | 0 | 0 | 17 | 2006-12-16 17:59:00 |
16/12/2006 | 18:00:00 | 2.79 | 0.18 | 238 | 11.8 | 0 | 0 | 18 | 2006-12-16 18:00:00 |
16/12/2006 | 18:01:00 | 2.62 | 0.144 | 238 | 11 | 0 | 0 | 17 | 2006-12-16 18:01:00 |
16/12/2006 | 18:02:00 | 2.77 | 0.118 | 238 | 11.6 | 0 | 0 | 17 | 2006-12-16 18:02:00 |
16/12/2006 | 18:03:00 | 3.74 | 0.108 | 237 | 16.4 | 0 | 16 | 18 | 2006-12-16 18:03:00 |
16/12/2006 | 18:04:00 | 4.93 | 0.202 | 235 | 21 | 0 | 37 | 16 | 2006-12-16 18:04:00 |
16/12/2006 | 18:05:00 | 6.05 | 0.192 | 233 | 26.2 | 0 | 37 | 17 | 2006-12-16 18:05:00 |
16/12/2006 | 18:06:00 | 6.75 | 0.186 | 232 | 29 | 0 | 36 | 17 | 2006-12-16 18:06:00 |
16/12/2006 | 18:07:00 | 6.47 | 0.144 | 232 | 27.8 | 0 | 37 | 16 | 2006-12-16 18:07:00 |
16/12/2006 | 18:08:00 | 6.31 | 0.116 | 232 | 27 | 0 | 36 | 17 | 2006-12-16 18:08:00 |
16/12/2006 | 18:09:00 | 4.46 | 0.136 | 235 | 19 | 0 | 37 | 16 | 2006-12-16 18:09:00 |
16/12/2006 | 18:10:00 | 3.4 | 0.148 | 236 | 15 | 0 | 22 | 18 | 2006-12-16 18:10:00 |
16/12/2006 | 18:11:00 | 3.09 | 0.152 | 237 | 13.8 | 0 | 12 | 17 | 2006-12-16 18:11:00 |
16/12/2006 | 18:12:00 | 3.73 | 0.144 | 236 | 16.4 | 0 | 27 | 17 | 2006-12-16 18:12:00 |
16/12/2006 | 18:13:00 | 2.31 | 0.16 | 237 | 9.6 | 0 | 1 | 17 | 2006-12-16 18:13:00 |
16/12/2006 | 18:14:00 | 2.39 | 0.158 | 237 | 10 | 0 | 1 | 17 | 2006-12-16 18:14:00 |
16/12/2006 | 18:15:00 | 4.6 | 0.1 | 234 | 21.4 | 0 | 20 | 17 | 2006-12-16 18:15:00 |
16/12/2006 | 18:16:00 | 4.52 | 0.076 | 234 | 19.6 | 0 | 9 | 17 | 2006-12-16 18:16:00 |
16/12/2006 | 18:17:00 | 4.2 | 0.082 | 234 | 17.8 | 0 | 1 | 17 | 2006-12-16 18:17:00 |
16/12/2006 | 18:18:00 | 4.47 | 0 | 233 | 19.2 | 0 | 1 | 16 | 2006-12-16 18:18:00 |
16/12/2006 | 18:19:00 | 2.85 | 0 | 236 | 12 | 0 | 1 | 17 | 2006-12-16 18:19:00 |
16/12/2006 | 18:20:00 | 2.93 | 0 | 235 | 12.4 | 0 | 1 | 17 | 2006-12-16 18:20:00 |
16/12/2006 | 18:21:00 | 2.94 | 0 | 236 | 12.4 | 0 | 2 | 17 | 2006-12-16 18:21:00 |
16/12/2006 | 18:22:00 | 2.93 | 0 | 236 | 12.4 | 0 | 1 | 17 | 2006-12-16 18:22:00 |
16/12/2006 | 18:23:00 | 2.93 | 0 | 236 | 12.4 | 0 | 1 | 17 | 2006-12-16 18:23:00 |
16/12/2006 | 18:24:00 | 3.45 | 0 | 235 | 15.2 | 0 | 1 | 17 | 2006-12-16 18:24:00 |
16/12/2006 | 18:25:00 | 4.87 | 0 | 234 | 20.8 | 0 | 1 | 17 | 2006-12-16 18:25:00 |
16/12/2006 | 18:26:00 | 4.87 | 0 | 234 | 20.8 | 0 | 1 | 17 | 2006-12-16 18:26:00 |
16/12/2006 | 18:27:00 | 4.87 | 0 | 234 | 20.8 | 0 | 1 | 17 | 2006-12-16 18:27:00 |
16/12/2006 | 18:28:00 | 3.18 | 0 | 236 | 13.8 | 0 | 1 | 17 | 2006-12-16 18:28:00 |
16/12/2006 | 18:29:00 | 2.92 | 0 | 236 | 12.4 | 0 | 1 | 17 | 2006-12-16 18:29:00 |
16/12/2006 | 18:30:00 | 2.93 | 0 | 236 | 12.4 | 0 | 1 | 17 | 2006-12-16 18:30:00 |
16/12/2006 | 18:31:00 | 2.91 | 0.05 | 236 | 12.4 | 0 | 1 | 17 | 2006-12-16 18:31:00 |
16/12/2006 | 18:32:00 | 2.61 | 0.052 | 235 | 11 | 0 | 1 | 17 | 2006-12-16 18:32:00 |
16/12/2006 | 18:33:00 | 2.71 | 0.162 | 235 | 11.6 | 0 | 0 | 17 | 2006-12-16 18:33:00 |
16/12/2006 | 18:34:00 | 3.54 | 0.086 | 234 | 15.6 | 0 | 1 | 16 | 2006-12-16 18:34:00 |
16/12/2006 | 18:35:00 | 6.07 | 0 | 232 | 26.4 | 0 | 27 | 17 | 2006-12-16 18:35:00 |
16/12/2006 | 18:36:00 | 4.54 | 0 | 234 | 19.4 | 0 | 1 | 17 | 2006-12-16 18:36:00 |
16/12/2006 | 18:37:00 | 4.41 | 0 | 232 | 18.8 | 0 | 1 | 16 | 2006-12-16 18:37:00 |
16/12/2006 | 18:38:00 | 2.91 | 0.048 | 234 | 13 | 0 | 1 | 17 | 2006-12-16 18:38:00 |
16/12/2006 | 18:39:00 | 2.33 | 0.054 | 235 | 9.8 | 0 | 1 | 17 | 2006-12-16 18:39:00 |
16/12/2006 | 18:40:00 | 2.26 | 0.054 | 235 | 9.6 | 0 | 1 | 17 | 2006-12-16 18:40:00 |
16/12/2006 | 18:41:00 | 2.27 | 0.054 | 235 | 9.6 | 0 | 1 | 17 | 2006-12-16 18:41:00 |
16/12/2006 | 18:42:00 | 2.26 | 0.054 | 235 | 9.6 | 0 | 1 | 17 | 2006-12-16 18:42:00 |
16/12/2006 | 18:43:00 | 2.19 | 0.068 | 236 | 9.2 | 0 | 1 | 17 | 2006-12-16 18:43:00 |
16/12/2006 | 18:44:00 | 2.98 | 0.166 | 235 | 13.2 | 0 | 1 | 17 | 2006-12-16 18:44:00 |
16/12/2006 | 18:45:00 | 4.2 | 0.174 | 234 | 17.8 | 0 | 1 | 17 | 2006-12-16 18:45:00 |
16/12/2006 | 18:46:00 | 4.2 | 0.186 | 234 | 17.8 | 0 | 1 | 16 | 2006-12-16 18:46:00 |
16/12/2006 | 18:47:00 | 4.22 | 0.178 | 234 | 18 | 0 | 1 | 17 | 2006-12-16 18:47:00 |
16/12/2006 | 18:48:00 | 2.79 | 0.188 | 235 | 12 | 0 | 2 | 17 | 2006-12-16 18:48:00 |
16/12/2006 | 18:49:00 | 2.54 | 0.088 | 235 | 10.8 | 0 | 4 | 17 | 2006-12-16 18:49:00 |
16/12/2006 | 18:50:00 | 2.5 | 0.08 | 234 | 10.6 | 0 | 3 | 17 | 2006-12-16 18:50:00 |
16/12/2006 | 18:51:00 | 2.34 | 0.07 | 234 | 10 | 0 | 1 | 16 | 2006-12-16 18:51:00 |
16/12/2006 | 18:52:00 | 2.32 | 0 | 233 | 9.8 | 0 | 0 | 17 | 2006-12-16 18:52:00 |
16/12/2006 | 18:53:00 | 2.45 | 0 | 234 | 10.6 | 0 | 1 | 17 | 2006-12-16 18:53:00 |
16/12/2006 | 18:54:00 | 4.3 | 0 | 232 | 18.4 | 0 | 1 | 16 | 2006-12-16 18:54:00 |
16/12/2006 | 18:55:00 | 4.23 | 0.09 | 232 | 18.2 | 0 | 1 | 17 | 2006-12-16 18:55:00 |
16/12/2006 | 18:56:00 | 4.23 | 0.09 | 232 | 18.2 | 0 | 2 | 16 | 2006-12-16 18:56:00 |
16/12/2006 | 18:57:00 | 3.92 | 0.084 | 233 | 17 | 0 | 1 | 17 | 2006-12-16 18:57:00 |
16/12/2006 | 18:58:00 | 4.22 | 0.09 | 232 | 18 | 0 | 1 | 17 | 2006-12-16 18:58:00 |
16/12/2006 | 18:59:00 | 4.22 | 0.09 | 232 | 18.2 | 0 | 1 | 16 | 2006-12-16 18:59:00 |
16/12/2006 | 19:00:00 | 4.07 | 0.088 | 232 | 17.4 | 0 | 1 | 17 | 2006-12-16 19:00:00 |
16/12/2006 | 19:01:00 | 3.61 | 0.09 | 232 | 15.6 | 0 | 2 | 16 | 2006-12-16 19:01:00 |
16/12/2006 | 19:02:00 | 3.46 | 0.09 | 233 | 14.8 | 0 | 1 | 17 | 2006-12-16 19:02:00 |
16/12/2006 | 19:03:00 | 3.43 | 0.09 | 232 | 14.8 | 0 | 1 | 16 | 2006-12-16 19:03:00 |
Date | Time | Global_active_power | Global_reactive_power | Voltage | Global_intensity | Sub_metering_1 | Sub_metering_2 | Sub_metering_3 | datetime |
---|---|---|---|---|---|---|---|---|---|
16/12/2006 | 17:24:00 | 0.263 | 0.792 | 0.0162 | 0.277 | NaN | 0.027 | 0.0556 | 2006-12-16 17:24:00 |
16/12/2006 | 17:25:00 | 0.412 | 0.826 | 0.0111 | 0.416 | NaN | 0.027 | 0 | 2006-12-16 17:25:00 |
16/12/2006 | 17:26:00 | 0.413 | 0.943 | 0.00969 | 0.416 | NaN | 0.0541 | 0.0556 | 2006-12-16 17:26:00 |
16/12/2006 | 17:27:00 | 0.415 | 0.951 | 0.0116 | 0.416 | NaN | 0.027 | 0.0556 | 2006-12-16 17:27:00 |
16/12/2006 | 17:28:00 | 0.192 | 1 | 0.0197 | 0.199 | NaN | 0.027 | 0.0556 | 2006-12-16 17:28:00 |
16/12/2006 | 17:29:00 | 0.173 | 0.989 | 0.017 | 0.175 | NaN | 0.0541 | 0.0556 | 2006-12-16 17:29:00 |
16/12/2006 | 17:30:00 | 0.196 | 0.985 | 0.0172 | 0.199 | NaN | 0.027 | 0.0556 | 2006-12-16 17:30:00 |
16/12/2006 | 17:31:00 | 0.196 | 0.985 | 0.0178 | 0.199 | NaN | 0.027 | 0.0556 | 2006-12-16 17:31:00 |
16/12/2006 | 17:32:00 | 0.192 | 0.966 | 0.0126 | 0.199 | NaN | 0.027 | 0.0556 | 2006-12-16 17:32:00 |
16/12/2006 | 17:33:00 | 0.191 | 0.966 | 0.0121 | 0.199 | NaN | 0.0541 | 0 | 2006-12-16 17:33:00 |
16/12/2006 | 17:34:00 | 0.293 | 0.943 | 0.00789 | 0.313 | NaN | 0.027 | 0.0556 | 2006-12-16 17:34:00 |
16/12/2006 | 17:35:00 | 0.418 | 0.89 | 0.00755 | 0.422 | NaN | 0.027 | 0.0556 | 2006-12-16 17:35:00 |
16/12/2006 | 17:36:00 | 0.394 | 0.905 | 0.00844 | 0.398 | NaN | 0.027 | 0 | 2006-12-16 17:36:00 |
16/12/2006 | 17:37:00 | 0.4 | 0.754 | 0.0081 | 0.404 | NaN | 0.0541 | 0.0556 | 2006-12-16 17:37:00 |
16/12/2006 | 17:38:00 | 0.242 | 0.799 | 0.0179 | 0.253 | NaN | 0.027 | 0.0556 | 2006-12-16 17:38:00 |
16/12/2006 | 17:39:00 | 0.155 | 0.534 | 0.0259 | 0.151 | NaN | 0 | 0.0556 | 2006-12-16 17:39:00 |
16/12/2006 | 17:40:00 | 0.14 | 0.288 | 0.0241 | 0.139 | NaN | 0 | 0.0556 | 2006-12-16 17:40:00 |
16/12/2006 | 17:41:00 | 0.161 | 0.295 | 0.0255 | 0.157 | NaN | 0 | 0.0556 | 2006-12-16 17:41:00 |
16/12/2006 | 17:42:00 | 0.14 | 0 | 0.0258 | 0.139 | NaN | 0 | 0.111 | 2006-12-16 17:42:00 |
16/12/2006 | 17:43:00 | 0.2 | 0 | 0.0204 | 0.217 | NaN | 0 | 0.0556 | 2006-12-16 17:43:00 |
16/12/2006 | 17:44:00 | 0.481 | 0 | 0.00718 | 0.488 | NaN | 0 | 0 | 2006-12-16 17:44:00 |
16/12/2006 | 17:45:00 | 0.716 | 0 | 0 | 0.723 | NaN | 0 | 0.0556 | 2006-12-16 17:45:00 |
16/12/2006 | 17:46:00 | 0.628 | 0 | 0.00516 | 0.645 | NaN | 0 | 0 | 2006-12-16 17:46:00 |
16/12/2006 | 17:47:00 | 0.387 | 0 | 0.0135 | 0.386 | NaN | 0 | 0.0556 | 2006-12-16 17:47:00 |
16/12/2006 | 17:48:00 | 0.297 | 0 | 0.0167 | 0.307 | NaN | 0 | 0.0556 | 2006-12-16 17:48:00 |
16/12/2006 | 17:49:00 | 0.138 | 0 | 0.0238 | 0.133 | NaN | 0 | 0.0556 | 2006-12-16 17:49:00 |
16/12/2006 | 17:50:00 | 0.136 | 0 | 0.0204 | 0.133 | NaN | 0 | 0.0556 | 2006-12-16 17:50:00 |
16/12/2006 | 17:51:00 | 0.135 | 0 | 0.0194 | 0.133 | NaN | 0 | 0.0556 | 2006-12-16 17:51:00 |
16/12/2006 | 17:52:00 | 0.139 | 0 | 0.0189 | 0.139 | NaN | 0 | 0.0556 | 2006-12-16 17:52:00 |
[Data preview omitted: ~70 rows of scaled (0-1) minute-by-minute household electric power readings for 2006-12-16, 17:53 through 19:03, showing date, time, the scaled power/voltage/sub-metering columns (one entirely NaN), and a combined datetime stamp.]
 | datetime | Global_active_power | Voltage | Global_intensity |
---|---|---|---|---|
1 | 2006-12-17T01:24:00Z | 0.263171554632754 | 0.0161994292429076 | 0.27710843373494 |
2 | 2006-12-17T01:25:00Z | 0.411627303399948 | 0.011121369817022 | 0.41566265060241 |
3 | 2006-12-17T01:26:00Z | 0.41344406955619 | 0.00969447708578144 | 0.41566265060241 |
4 | 2006-12-17T01:27:00Z | 0.415260835712432 | 0.0115830115830117 | 0.41566265060241 |
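The fit() call further down expects 3-D arrays shaped (samples, train_window, features), plus a 10-column target to match the 10-unit output layer. The windowing step isn't shown on the slide, so here is a minimal sketch of one way to build those arrays; power_scaled (a data frame holding the scaled series), the 60-minute look-back, and the 10-minute horizon are assumptions, while train_window, features, data_prep_tf, and data_prep_y_tf are the names the model code below uses.

# Sketch only: slice the scaled series into overlapping windows.
train_window <- 60                       # assumed look-back: 60 minutes
horizon      <- 10                       # assumed target: next 10 minutes (matches the 10-unit output layer)
features     <- ncol(power_scaled)       # power_scaled is a hypothetical data frame of scaled columns

n_samples <- nrow(power_scaled) - train_window - horizon + 1

data_prep_tf   <- array(0, dim = c(n_samples, train_window, features))
data_prep_y_tf <- array(0, dim = c(n_samples, horizon))

for(i in seq_len(n_samples)){
  # inputs: the previous train_window minutes of every scaled column
  data_prep_tf[i, , ] <- as.matrix(power_scaled[i:(i + train_window - 1), ])
  # target: the next `horizon` values of Global_active_power
  data_prep_y_tf[i, ] <- power_scaled$Global_active_power[(i + train_window):(i + train_window + horizon - 1)]
}
# data_test_tf / data_test_y_tf would be built the same way from a later, held-out slice of the series.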
savebest = keras::callback_early_stopping(restore_best_weights = T, patience = 6)
opt = optimizer_adam()

model <- keras_model_sequential() %>%
  layer_conv_1d(input_shape = c(train_window, features), filters = 21, kernel_size = 1,
                strides = 1, activation = 'relu', name = 'conv-1d-1', padding = 'same') %>%
  # layer_batch_normalization(name = 'batchnorm') %>%
  # layer_activation_relu() %>%
  layer_lstm(21, name = 'lstm_layer', return_sequences = T, stateful = F) %>%
  layer_lstm(10, input_shape = c(train_window, features), name = 'lstm_layer_2', stateful = F) %>%
  layer_dense(10, activation = "linear", name = 'outputlayer')
summary(model)
compile(model, loss = 'MSE', optimizer = opt)

# Normally we'd use a 'stateful' LSTM, call reset_states() between epochs, and loop over the
# splits manually - but RStudio HATES that. With some debugging, the loop below works.
# We'd also prefer a layer-normalization layer.
for(epoch in c(1:20)){
  print(paste("Beginning epoch #", epoch))
  keras::fit(model, x = data_prep_tf, y = data_prep_y_tf, epochs = 1, batch_size = 10,
             validation_data = list(x = data_test_tf, y = data_test_y_tf),
             shuffle = F, callbacks = savebest)
  model %>% reset_states()
}

model %>% save_model_hdf5("/Users/connor/Desktop/GithubProjects/Econometrics/524/EC524W20/lab/005-Perceptrons_and_NeuralNets/lstm_model_sf.hdf5")
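Once training finishes, the saved file can be reloaded and used for forecasting. A short usage sketch (the shortened file path and the RMSE check are illustrative; data_test_tf and data_test_y_tf are the validation arrays from above):

# Reload the trained network from disk and forecast the held-out windows.
model_reloaded <- load_model_hdf5("lstm_model_sf.hdf5")
preds <- predict(model_reloaded, data_test_tf)    # one row per test window, 10 columns
sqrt(mean((preds - data_test_y_tf)^2))            # RMSE on the scaled data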
What if we literally don't know anything about the data, just how each object relates to one another (think DNA structures)?
GCN: Graph Convolutional Networks. These read in graph data and, using convolution techniques much like the ones you saw for CNNs, can learn to predict new patterns.
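The core of one common formulation (Kipf and Welling's graph convolution) is just a matrix product: each layer mixes a node's features with its neighbors' features through a normalized adjacency matrix. A minimal base-R sketch of a single layer, using a made-up 4-node graph and random weights purely for illustration:

# One graph-convolution layer: H_out = ReLU( D^(-1/2) (A + I) D^(-1/2) H W )
A <- matrix(c(0,1,0,0,
              1,0,1,1,
              0,1,0,1,
              0,1,1,0), nrow = 4, byrow = TRUE)   # toy adjacency matrix (who is connected to whom)
A_hat <- A + diag(4)                              # add self-loops so each node keeps its own features
D_inv_sqrt <- diag(1 / sqrt(rowSums(A_hat)))      # symmetric degree normalization
H <- matrix(rnorm(4 * 3), nrow = 4)               # 4 nodes, 3 input features each
W <- matrix(rnorm(3 * 2), nrow = 3)               # learned weights: 3 features in, 2 out
relu <- function(x) pmax(x, 0)
H_out <- relu(D_inv_sqrt %*% A_hat %*% D_inv_sqrt %*% H %*% W)
H_out                                             # new 4 x 2 node representations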
Oh yeah, remember this?
That was generated by a neural network: a GAN, or Generative Adversarial Network, which is trained by having two neural networks duke it out.
One of the networks (the generator) tries to imitate the hand-drawn pictures.
The other (the discriminator) tries to detect the computer-generated ones.
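In heavily stripped-down R keras, that tug-of-war looks roughly like the sketch below. This is a sketch under loose assumptions, not a recipe: make_generator(), make_discriminator(), and real_batch() are hypothetical helpers, and real GANs need far more care with architectures and learning rates. The point is just the structure: two models trading updates.

latent_dim    <- 100                        # size of the random noise vector
generator     <- make_generator()           # hypothetical: noise -> fake image
discriminator <- make_discriminator()       # hypothetical: image -> P(real)
compile(discriminator, loss = "binary_crossentropy", optimizer = optimizer_adam())

# Chain the two: the generator feeds a frozen copy of the discriminator, so the
# generator's updates are driven by whether its fakes fool the discriminator.
freeze_weights(discriminator)
noise_in <- layer_input(shape = c(latent_dim))
gan <- keras_model(noise_in, discriminator(generator(noise_in)))
compile(gan, loss = "binary_crossentropy", optimizer = optimizer_adam())

for(step in 1:1000){
  noise <- matrix(rnorm(32 * latent_dim), nrow = 32)
  fakes <- predict(generator, noise)

  # 1. Teach the discriminator: real images get label 1, generated images get label 0.
  train_on_batch(discriminator, real_batch(32), rep(1, 32))   # real_batch() is hypothetical
  train_on_batch(discriminator, fakes, rep(0, 32))

  # 2. Teach the generator (through the frozen discriminator) to get its fakes labeled 1.
  train_on_batch(gan, noise, rep(1, 32))
}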
Models like this are what power the often-cited deep-fake videos.
Imagine Nicolas Cage in every movie, playing every part.
The only difference between these is what data the model is trained on.
But you basically understand how to do this yourself now.
Just by looking at the diagram for a while, and learning how convolutional neural nets work, you could figure this out.
These models are powerful.
However, they aren't interpretable (yet, though they're getting MUCH better at this every year).
They also use hundreds of thousands of parameters for even very simple models.
That means you have to be super careful in how you evaluate them.
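One concrete habit: know how many parameters you're fitting, and score the model only on data it never saw during training. With the keras R package that might look like the sketch below; x_test and y_test are placeholders for a held-out split shaped like the LSTM arrays above, and the naive baseline is illustrative.

count_params(model)                        # total number of parameters in the network

# Score only on held-out data the network never trained on (returns the compiled loss, MSE here).
evaluate(model, x_test, y_test)

# Always compare against a naive baseline, e.g. "predict the last observed value".
naive_pred <- x_test[, dim(x_test)[2], 1]          # last time step of the first feature, per window
sqrt(mean((naive_pred - y_test[, 1])^2))           # RMSE of the naive one-step forecast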